
Latest Publications in the Journal of Grid Computing

Dynamic Multi-Resource Fair Allocation with Elastic Demands
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-27 | DOI: 10.1007/s10723-024-09754-6
Hao Guo, Weidong Li

In this paper, we study dynamic multi-resource maximin share fair allocation based on the elastic demands of users in a cloud computing system. In this problem, users do not stay in the computing system all the time, and they are assigned resources only while they remain in the system. To further improve resource utilization, our model allows users to dynamically select how their tasks are processed based on the resources allocated in each time slot. For this problem, we propose a mechanism called maximin share fairness with elastic demands (MMS-ED) for cloud computing systems. We prove theoretically that the allocation returned by the mechanism is Lorenz-dominating, that it satisfies cumulative maximin share fairness, and that the mechanism satisfies Pareto efficiency, proportionality, and strategy-proofness. In a specific setting, MMS-ED performs even better and also satisfies another desirable property, weighted envy-freeness. In addition, we design an algorithm to realize this mechanism, conduct simulation experiments with Alibaba cluster traces, and analyze the impact from the perspectives of elastic demand and cumulative fairness. The experimental results show that the MMS-ED mechanism outperforms the other three similar mechanisms in terms of resource utilization and user utility; moreover, introducing elastic demand and cumulative fairness can effectively improve resource utilization.
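As a concrete illustration of the dominant/maximin share idea this abstract builds on, here is a minimal progressive-filling sketch in the style of dominant resource fairness. The demands, capacities, and user set are invented, and this is the classic DRF routine, not the paper's MMS-ED mechanism:

```python
# Progressive filling in the spirit of maximin/dominant share fairness.
# Illustrative only: demands, capacities, and the user set are hypothetical.
def progressive_filling(demands, capacity):
    """demands: {user: per-task resource vector}; capacity: total vector."""
    used = [0.0] * len(capacity)
    tasks = {u: 0 for u in demands}              # tasks granted so far

    def dominant_share(u):
        # largest fraction of any single resource this user occupies
        return max(tasks[u] * d / c for d, c in zip(demands[u], capacity))

    while True:
        u = min(demands, key=dominant_share)     # lowest share goes next
        want = demands[u]
        if any(used[i] + want[i] > capacity[i] for i in range(len(capacity))):
            return tasks                         # next grant no longer fits
        for i in range(len(capacity)):
            used[i] += want[i]
        tasks[u] += 1

print(progressive_filling({"A": [2, 1], "B": [1, 3]}, capacity=[20, 20]))
# -> {'A': 7, 'B': 4}: dominant shares stay balanced until capacity runs out
```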

Citations: 0
Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-26 | DOI: 10.1007/s10723-024-09741-x
Hulin Jin, Yong-Guk Kim, Zhiran Jin, Chunyang Fan, Yonglong Xu

The growing number of individual vehicles and intelligent transportation systems has accelerated the development of Internet of Vehicles (IoV) technologies. The IoV is a highly interactive network containing data on the locations, speeds, routes, and other aspects of vehicles. Task offloading was introduced to address the issue that current task scheduling models and tactics are largely simplistic and do not consider a reasonable distribution of tasks, which results in a poor offloading completion rate. This work tackles the joint task offloading problem with a Distributed Deep Reinforcement Learning (DDRL)-based Genetic Optimization Algorithm (GOA). A system utility optimization model is first established by separating the interaction and computation models; DDRL-GOA then solves it to produce the best task offloading scheme. The approach increases job completion rates by modifying the complexity design and providing global best-case assurances with DDRL-GOA. Finally, empirical research validates the proposed technique in constructed scenarios. We also formulate joint task offloading, load distribution, and resource allocation as an integer problem to lower system costs. The experimental results show that, in addition to high convergence efficiency, the proposed approach achieves a substantially lower system cost than current methods.
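To make the genetic-optimization side of the hybrid concrete, the toy sketch below evolves binary offloading decisions with a plain genetic algorithm. The per-task costs and GA parameters are hypothetical, and the DRL component that guides DDRL-GOA is omitted:

```python
import random

# Toy genetic search over binary offloading decisions (1 = offload to edge,
# 0 = run locally). The fitness is a made-up per-task cost table, not the
# paper's utility model; DDRL-GOA would let a DRL agent shape the search.
random.seed(0)
LOCAL_COST, EDGE_COST = [5, 3, 8, 6, 4], [2, 4, 3, 5, 1]

def cost(genome):
    return sum(EDGE_COST[i] if g else LOCAL_COST[i] for i, g in enumerate(genome))

def evolve(pop_size=20, generations=50, p_mut=0.1):
    n = len(LOCAL_COST)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # elitist: cheapest first
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))   # e.g. [1, 0, 1, 1, 1] with cost 14
```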

Citations: 0
Decentralized AI-Based Task Distribution on Blockchain for Cloud Industrial Internet of Things
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-24 | DOI: 10.1007/s10723-024-09751-9
Amir Javadpour, Arun Kumar Sangaiah, Weizhe Zhang, Ankit Vidyarthi, HamidReza Ahmadi

This study presents an environmentally friendly mechanism for task distribution designed explicitly for blockchain Proof of Authority (POA) consensus. This approach facilitates the selection of virtual machines for tasks such as data processing, transaction verification, and adding new blocks to the blockchain. Given the current lack of effective methods for integrating POA blockchain into the Cloud Industrial Internet of Things (CIIoT) due to their inefficiency and low throughput, we propose a novel algorithm that employs the Dynamic Voltage and Frequency Scaling (DVFS) technique, replacing the periodic transaction authentication process among validator candidates. Managing computer power consumption becomes a critical concern, especially within the Internet of Things ecosystem, where device power is constrained, and transaction scalability is crucial. Virtual machines must validate transactions (tasks) within specific time frames and deadlines. The DVFS technique efficiently reduces power consumption by intelligently scheduling and allocating tasks to virtual machines. Furthermore, we leverage artificial intelligence and neural networks to match tasks with suitable virtual machines. The simulation results demonstrate that our proposed approach harnesses migration and DVFS strategies to optimize virtual machine utilization, resulting in decreased energy and power consumption compared to non-DVFS methods. This achievement marks a significant stride towards seamlessly integrating blockchain and IoT, establishing an ecologically sustainable network. Our approach boasts additional benefits, including decentralization, enhanced data quality, and heightened security. We analyze simulation runtime and energy consumption in a comprehensive evaluation against existing techniques such as WPEG, IRMBBC, and BEMEC. The findings underscore the efficiency of our technique (LBDVFSb) across both criteria.
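The energy lever behind the proposed scheme can be illustrated with a minimal DVFS sketch: pick the lowest CPU frequency that still meets a task's deadline. The frequency levels, cycle counts, and the E ~ f^2 per-cycle energy model are illustrative assumptions, not the paper's LBDVFSb algorithm:

```python
# Minimal DVFS sketch: choose the lowest frequency level that meets a
# task's deadline; dynamic energy per cycle grows roughly with f^2, so
# slower is cheaper. All numbers here are hypothetical.
FREQS_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]

def pick_frequency(cycles, deadline_s):
    for f in FREQS_GHZ:                       # ascending: lowest first
        if cycles / (f * 1e9) <= deadline_s:
            return f
    return None                               # infeasible even at max speed

def relative_energy(cycles, f_ghz):
    return cycles * f_ghz ** 2                # E = k * C * f^2, constant k dropped

task_cycles, deadline = 3e9, 2.0              # 3 Gcycles, 2 s budget
f = pick_frequency(task_cycles, deadline)
print(f, relative_energy(task_cycles, f) / relative_energy(task_cycles, 2.4))
# -> 1.6 GHz at ~0.44x the energy of always running at 2.4 GHz
```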

Citations: 0
A Probabilistic Deadline-aware Application Offloading in a Multi-Queueing Fog System: A Max Entropy Framework
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-22 | DOI: 10.1007/s10723-024-09753-7

Abstract

Cloud computing and its derivatives, such as fog and edge computing, have propelled the IoT era, integrating AI and deep learning for process automation. Despite transformative growth in the healthcare, education, and automation domains, challenges persist, particularly the impact of multi-hop public networks on data upload time, which affects response time, failure rates, and security. Existing scheduling algorithms, designed around parameters such as deadline, priority, arrival rate, and arrival pattern, can minimize execution time for high-priority applications. However, the difficulty lies in simultaneously minimizing overall application execution time while mitigating resource depletion issues for low-priority applications. This paper introduces a cloud-fog-based computing architecture to tackle fog node resource starvation, incorporating joint probability, loss probability, and maximum entropy concepts. The proposed model utilizes a probabilistic application scheduling algorithm that considers priority and deadline and employs expected loss probability for task offloading. A second algorithm focuses on resource starvation, optimizing the task sequence for minimal response time and improved quality of service in a multi-queueing fog system. The paper demonstrates that the proposed model outperforms state-of-the-art models, achieving a 3.43-5.71% quality-of-service improvement and a 99.75-267.68 msec reduction in response time through efficient resource allocation.
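One hedged way to picture expected-loss-driven offloading is to model each fog node as an M/M/1 queue and dispatch each task to the node with the lowest deadline-miss probability. The arrival and service rates below are invented, and the paper's max-entropy machinery is not reproduced:

```python
import math

# Deadline-aware dispatch sketch: for an M/M/1 queue the sojourn time is
# exponential with rate (mu - lam), so P(miss) = exp(-(mu - lam) * d).
# Node rates are made-up numbers, not the paper's workload model.
nodes = {                      # lam = arrival rate, mu = service rate (tasks/s)
    "fog-1": {"lam": 8.0, "mu": 10.0},
    "fog-2": {"lam": 4.0, "mu": 6.0},
    "cloud": {"lam": 20.0, "mu": 40.0},
}

def miss_probability(lam, mu, deadline):
    if mu <= lam:
        return 1.0             # unstable queue: treat as certain loss
    return math.exp(-(mu - lam) * deadline)

def dispatch(deadline):
    losses = {n: miss_probability(p["lam"], p["mu"], deadline)
              for n, p in nodes.items()}
    return min(losses, key=losses.get), losses

target, losses = dispatch(deadline=0.5)
print(target, {k: round(v, 3) for k, v in losses.items()})
# -> 'cloud' wins: its spare capacity makes a 0.5 s deadline miss unlikely
```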

Citations: 0
Employing RNN and Petri Nets to Secure Edge Computing Threats in Smart Cities
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-22 | DOI: 10.1007/s10723-023-09733-3

Abstract

The Industrial Internet of Things (IIoT) revolution has led to the development of a system that enhances communication among a city's assets. This system relies on wireless connections to numerous resource-limited gadgets deployed throughout the urban landscape. However, this technology has exposed these networks to various harmful assaults, cyberattacks, and potential hacker threats, jeopardizing the security of wireless information transmission. Specifically, unprotected IIoT networks act as vulnerable backdoor entry points for potential attacks. To address these challenges, this project proposes a comprehensive security structure that combines Extreme Learning Machine-based Replicator Neural Networks (ELM-RNN) with Deep Reinforcement Learning-based Deep Q-Networks (DRL-DQN) to safeguard against edge computing risks in intelligent cities. The proposed system starts by introducing a distributed authorization mechanism that employs an established trust paradigm to effectively regulate data flows within the network. Furthermore, a novel framework called Secure Trust-Aware Philosopher Privacy and Authentication (STAPPA), modeled using Petri nets, mitigates network privacy breaches and enhances data protection. The system employs the Garson algorithm alongside the ELM-based RNN to optimize network performance and strengthen anomaly detection capabilities. This enables efficient determination of the shortest routes, accurate anomaly detection, and effective search optimization within the network environment. Through extensive simulation, the proposed security framework demonstrates remarkable detection and accuracy rates by leveraging the power of reinforcement learning.
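The replicator-network idea behind ELM-RNN can be sketched as follows: train a model to reproduce its own input on benign data, then flag records with large reconstruction error as anomalies. A generic scikit-learn MLP stands in for the ELM-trained replicator, and the traffic features are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Replicator-style anomaly scoring sketch. The features are synthetic and
# the MLP is a stand-in for the paper's ELM-trained replicator network.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))        # benign traffic features
attack = rng.normal(4.0, 1.0, size=(10, 8))         # shifted = anomalous

rep = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
rep.fit(normal, normal)                             # learn to copy benign input

def score(x):
    return np.mean((rep.predict(x) - x) ** 2, axis=1)   # reconstruction MSE

threshold = np.percentile(score(normal), 99)        # tolerate ~1% false alarms
print((score(attack) > threshold).mean())           # close to 1.0: attacks flagged
```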

Citations: 0
Edge Computing Empowered Smart Healthcare: Monitoring and Diagnosis with Deep Learning Methods
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-21 | DOI: 10.1007/s10723-023-09726-2

Abstract

Nowadays, data syncing before switchover and migration are two of the most pressing issues confronting cloud-based architecture. The requirement for a centrally managed IoT-based infrastructure limits scalability due to the security problems of cloud computing. The fundamental factor is that health systems, such as health monitoring, demand computational operations on large amounts of data, which makes device latency a sensitive issue in these systems. Fog computing is a novel approach to increasing the effectiveness of cloud computing by making the necessary resources available close to end users. Existing fog computing approaches still have several drawbacks, including the tendency to address either reaction time or result correctness, while managing both at once compromises system compatibility. Focusing on deep learning algorithms and automated monitoring, FETCH is a proposed framework that connects with edge computing devices. It provides a constructive framework for real-life healthcare systems, such as those treating heart disease and other conditions. The suggested fog-enabled cloud computing system uses FogBus and exhibits benefits in terms of power consumption, communication bandwidth, oscillation, delay, execution duration, and correctness.

Citations: 0
Dynamic Resource Management in MEC Powered by Edge Intelligence for Smart City Internet of Things
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-13 | DOI: 10.1007/s10723-024-09749-3
Xucheng Wan

The Internet of Things (IoT) has become an infrastructure that makes accurate and efficient smart cities possible. The Industry 4.0 era of intelligent production has made mobile edge computing (MEC) essential. In a smart city, computationally demanding tasks can be delegated from the MEC server to central cloud servers for processing. This paper develops an integrated optimization framework for task offloading and dynamic resource allocation to reduce the power usage of all IoT devices subject to delay limits and resource limitations. A Federated Learning FL-DDPG algorithm based on the Deep Deterministic Policy Gradient (DDPG) architecture is proposed for dynamic resource management in MEC networks. This research addresses the optimization of CPU frequencies, transmit power, and IoT device offloading decisions for cellular networks with multiple MEC servers and multiple IoT devices. A weighted average of the processing load on the central MEC server (PMS), the system's overall energy use, and the task-dropping expense is formulated as the optimization objective. Lyapunov optimization theory is used to formulate a stochastic optimization strategy that reduces the energy use of IoT devices in MEC networks and optimizes bandwidth assignment and transmit power distribution. The modeling studies demonstrate that, compared with benchmark approaches, the proposed algorithm efficiently enhances system performance while consuming less energy.
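The Lyapunov-based control the abstract alludes to is commonly realized with the standard drift-plus-penalty rule: keep a virtual backlog queue Q and, in each slot, pick the action minimizing V*power - Q*service. The actions, rates, and trade-off weight V below are invented, not the paper's parameters:

```python
# Drift-plus-penalty sketch: Q tracks the task backlog; each slot the device
# chooses the transmit decision minimizing V*power - Q*service_rate, trading
# energy against queue stability. All numbers are illustrative.
V = 5.0                                    # energy/backlog trade-off weight
ACTIONS = [                                # (power in W, service in tasks/slot)
    {"p": 0.0, "serve": 0.0},              # stay idle
    {"p": 1.0, "serve": 3.0},              # transmit at low power
    {"p": 2.5, "serve": 8.0},              # transmit at high power
]

def run(arrivals):
    Q, energy = 0.0, 0.0
    for a_t in arrivals:
        act = min(ACTIONS, key=lambda a: V * a["p"] - Q * a["serve"])
        Q = max(Q + a_t - act["serve"], 0.0)   # virtual queue update
        energy += act["p"]
    return Q, energy

print(run([2.0, 5.0, 1.0, 6.0, 0.0, 4.0]))   # -> (4.0, 5.0): backlog, energy
```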

Citations: 0
Dependent Task Scheduling Using Parallel Deep Neural Networks in Mobile Edge Computing
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-12 | DOI: 10.1007/s10723-024-09744-8
Sheng Chai, Jimmy Huang

Conventional detection techniques for intelligent devices rely primarily on deep learning algorithms, which, despite their high precision, are hindered by significant compute and energy requirements. This work proposes a novel solution to these constraints using mobile edge computing (MEC). We present the Dependent Task-Offloading technique (DTOS), a deep reinforcement learning-based technique for optimizing task offloading to numerous heterogeneous edge servers in intelligent prosthesis applications. By expressing the task offloading problem as a Markov decision process, DTOS addresses the dual challenge of lowering network service latency and power consumption, employing a weighted-sum optimization to find the best policy. The technique uses parallel deep neural networks (DNNs), which both generate candidate offloading decisions and cache the most successful options for further iterations. Furthermore, DTOS updates the DNN parameters using a prioritized experience replay method, which improves learning by focusing on valuable experiences. The use of DTOS in a real-world MEC scenario, where a deep learning-based movement intent detection algorithm is deployed on intelligent prostheses, demonstrates its applicability and effectiveness. The experimental results show that DTOS consistently makes optimal decisions in work offloading and planning, demonstrating its potential to significantly improve the operational efficiency of intelligent prostheses. The study thus introduces a novel approach that combines deep reinforcement learning with MEC, representing a substantial development in the field of intelligent prostheses through optimal task offloading and reduced resource usage.
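A minimal sketch of the parallel-DNN decision step might look like this: several independently initialized networks each propose an offloading vector, the cheapest proposal wins, and the winning (state, decision) pairs would then feed the prioritized replay buffer for training. Linear maps stand in for the DNNs, and the cost model is invented:

```python
import numpy as np

# Parallel-proposal sketch: K toy "networks" each map the task state to a
# binary offloading vector; the lowest-cost proposal is selected. The
# latency/energy cost model is hypothetical, not the paper's.
rng = np.random.default_rng(1)
N_TASKS, K = 6, 4
dnns = [rng.normal(size=(N_TASKS, N_TASKS)) for _ in range(K)]

def propose(state, W):
    logits = W @ state
    return (logits > 0).astype(int)           # 1 = offload, 0 = run locally

def cost(state, decision):                    # invented weighted cost
    local, edge = 2.0 * state, 0.5 * state + 0.8
    return float(np.sum(np.where(decision == 1, edge, local)))

state = rng.uniform(0.5, 2.0, size=N_TASKS)   # per-task workload
candidates = [propose(state, W) for W in dnns]
best = min(candidates, key=lambda d: cost(state, d))
print(best, round(cost(state, best), 2))      # best of the K proposals
```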

Citations: 0
Joint Task Offloading and Multi-Task Offloading Based on NOMA Enhanced Internet of Vehicles in Edge Computing
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-12 | DOI: 10.1007/s10723-024-09748-4
Jie Zhao, Ahmed M. El-Sherbeeny

With the rapid development of technology, the Internet of Vehicles (IoV) has become increasingly important. However, as the number of vehicles on highways increases, ensuring reliable communication between them has become a significant challenge. To address this issue, this paper proposes a novel approach that combines Non-Orthogonal Multiple Access (NOMA) with a time-optimized multitask offloading model based on Optimal Stopping Theory (OST) principles. NOMA-OST is a promising technology that can address the high volume of multiple access and the need for reliable communication in the IoV. A NOMA-OST-based IoV system is proposed to meet Vehicle-to-Vehicle (V2V) communication requirements. This approach jointly optimizes task offloading and resource allocation for multiple users, tasks, and servers. NOMA enables efficient resource sharing by accommodating multiple devices on the same channel, whereas OST ensures timely and intelligent task offloading decisions, improving the reliability and efficiency of V2V communication within the IoV. The paper also suggests a low-complexity sub-optimal matching approach for sub-channel allocation to increase the effectiveness of offloading. Simulation results show that NOMA with OST significantly improves the system's energy efficiency (EE) and reduces computation time. The approach also enhances the effectiveness of task offloading and resource allocation, leading to better overall system performance. Under V2V communication requirements in the IoV, NOMA with OST significantly outperforms traditional orthogonal multiple access methods. Overall, NOMA with OST can meet the high-reliability requirements of V2V communication in the IoV while improving system performance and energy efficiency and reducing computation time, making it a valuable technology for IoV applications.
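The NOMA half of the design rests on power-domain superposition with successive interference cancellation (SIC). A standard two-user downlink sketch, with invented channel gains and power split, shows why both users can share one channel use:

```python
import math

# Two-user power-domain NOMA with SIC: the weak user decodes while treating
# the strong user's signal as interference; the strong user cancels the weak
# user's signal first and then sees only noise. Gains and the power split
# are illustrative numbers, not the paper's system parameters.
P, N0, BW = 1.0, 1e-3, 1.0            # total power, noise power, normalized bandwidth
g_strong, g_weak = 1.0, 0.05          # channel gains
a_weak = 0.8                          # larger power share to the weak user

def rate(snr):
    return BW * math.log2(1.0 + snr)  # Shannon rate per channel use

# weak user: interference from the strong user's power share remains
r_weak = rate((a_weak * P * g_weak) / ((1 - a_weak) * P * g_weak + N0))
# strong user: SIC removes the weak user's signal, leaving only noise
r_strong = rate(((1 - a_weak) * P * g_strong) / N0)
print(round(r_weak, 2), round(r_strong, 2))   # -> 2.21 7.65 bits/s/Hz
```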

Citations: 0
An IoT-based Covid-19 Healthcare Monitoring and Prediction Using Deep Learning Methods
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-02-09 | DOI: 10.1007/s10723-024-09742-w
Jianjia Liu, Xin Yang, Tiannan Liao, Yong Hang

The Internet of Things (IoT) is driving a significant transformation in the healthcare industry by improving patient care while reducing treatment costs. The main aim of this research is to monitor COVID-19 patients and report health issues immediately using IoT, with the collected data analyzed by a deep learning model. Advances in sensor and mobile technologies have given rise to IoT-based healthcare systems, which are more preventive than traditional ones. This paper develops an efficient real-time IoT-based COVID-19 monitoring and prediction system using a deep learning model. By collecting and analyzing symptomatic patient data, suspected COVID-19 cases are predicted at an early stage. The effective parameters are selected with the Modified Chicken Swarm Optimization (MCSO) approach by mining the health parameters gathered from the sensors. The presence of COVID-19 is then computed from the selected features using a hybrid deep learning model combining convolution and graph LSTM (ConvGLSTM). The process includes four stages: data collection, data analysis (feature selection), the diagnostic system (DL model), and the cloud system (storage). The developed model is evaluated on a dataset from Srinagar using metrics such as accuracy, precision, recall, F1 score, RMSE, and AUC. The results show that the proposed model is effective and superior to traditional approaches for the early identification of COVID-19.
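A hedged sketch of a convolution-plus-LSTM hybrid in the spirit of ConvGLSTM is shown below; the paper's graph LSTM is replaced by a plain LSTM, and all layer sizes and input shapes are hypothetical:

```python
import torch
import torch.nn as nn

# Conv + LSTM hybrid sketch for time-series vital signs. Illustrative only:
# the actual ConvGLSTM uses a *graph* LSTM, and these sizes are invented.
class ConvLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # COVID-19 presence logit

    def forward(self, x):                     # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))      # convolve over the time axis
        z, _ = self.lstm(z.transpose(1, 2))   # back to (batch, time, channels)
        return self.head(z[:, -1])            # last time step -> logit

model = ConvLSTM()
window = torch.randn(4, 24, 6)                # 4 patients, 24 readings, 6 vitals
prob = torch.sigmoid(model(window))           # per-patient probability
print(prob.shape)                             # torch.Size([4, 1])
```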

Citations: 0