
Journal of Grid Computing: Latest Publications

Smart Financial Investor’s Risk Prediction System Using Mobile Edge Computing
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-04. DOI: 10.1007/s10723-023-09710-w
Caijun Cheng, Huazhen Huang

Economic and social growth has propelled the financial system to new heights and ushered the financial sector into a new era. In this climate, public and corporate financial investment activity has risen sharply and now plays a significant role in the efficient use of market capital. Because risk and speculative enthusiasm coexist, the sector is exposed to high-risk events that destabilize market order and cause tangible financial losses. Operational risk is a significant barrier to an organization’s growth: a single act of negligence can rapidly erode a business’s standing. Strengthening funding management and risk forecasting is therefore essential for companies to develop successfully, enhance their competitiveness, and minimize negative effects. Accordingly, this study draws on mobile edge computing to build an intelligent system that forecasts the various risks arising throughout the financial investment process, based on operational knowledge from major investment platforms. A CNN-LSTM approach built on knowledge graphs is then used to forecast financial risks. The results are examined thoroughly through experiments, demonstrating that the methodology can accurately estimate the risk associated with financial investments. Finally, a plan for improving the financial risk prediction system is put forward.

Citations: 0
Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-12-04. DOI: 10.1007/s10723-023-09708-4
Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti

Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it a promising method for advancing the Internet of Things (IoT). However, current approaches are limited by factors such as network latency, bandwidth, energy consumption, task characteristics, and edge server overload. To address these limitations, this research proposes a novel approach that integrates Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) and a Markov Decision Problem (MDP) formulation for task offloading in MEC. First, the proposed ITODDPG algorithm, built on DDPG and the MDP, formulates the task offloading problem in MEC as an MDP, enabling the agent to learn a policy that maximizes the expected cumulative reward. Second, ITODDPG employs a deep neural network to approximate the Q-function, which maps state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that ITODDPG outperforms the baseline algorithms in average reward and convergence speed. Beyond its superior performance, the proposed approach can learn complex non-linear policies using a DNN and an information-theoretic objective function, further improving task offloading in MEC. Experimental trials indicate that the approach outperforms the three baseline methods, is highly scalable, can handle large and complex environments, and is suitable for deployment in real-world scenarios, making it applicable to a diverse range of task offloading and MEC applications.
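The MDP formulation above can be illustrated with a toy binary-offloading problem. As a minimal stand-in for the paper's DDPG agent (which uses neural-network function approximation and continuous actions), the sketch below trains a tabular Q-learning agent on an invented state space of queue level and channel quality; all states, costs, and weights are illustrative assumptions, not values from the paper.

```python
import random

# Toy MDP for binary offloading: state = (queue_level, channel_quality),
# action = 0 (execute locally) or 1 (offload to the MEC server).
STATES = [(q, c) for q in range(3) for c in range(3)]
ACTIONS = [0, 1]

def reward(state, action):
    queue, channel = state
    if action == 0:                     # local execution: cost grows with queue backlog
        latency, energy = 2.0 + queue, 1.5
    else:                               # offloading: cost shrinks with channel quality
        latency, energy = 3.0 - channel, 0.5
    return -(latency + 0.5 * energy)    # agent maximizes negative weighted cost

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q_table = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):             # short episode
            if rng.random() < eps:      # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q_table[(s, x)])
            r = reward(s, a)
            s2 = rng.choice(STATES)     # i.i.d. transitions for simplicity
            best_next = max(q_table[(s2, x)] for x in ACTIONS)
            q_table[(s, a)] += alpha * (r + gamma * best_next - q_table[(s, a)])
            s = s2
    return q_table

q = train()
# With a full queue and a good channel, offloading should dominate.
policy = max(ACTIONS, key=lambda a: q[((2, 2), a)])
print(policy)
```

The learned policy simply recovers the immediate-reward ordering here because transitions are action-independent; a real MEC agent would face state dynamics that couple actions to future congestion.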

Citations: 0
Efficient Prediction of Makespan Matrix Workflow Scheduling Algorithm for Heterogeneous Cloud Environments
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-11-28. DOI: 10.1007/s10723-023-09711-9
Longxin Zhang, Minghui Ai, Runti Tan, Junfeng Man, Xiaojun Deng, Keqin Li

Leveraging a cloud computing environment to execute workflow applications offers high flexibility and strong scalability, significantly improving resource utilization. Current research focuses heavily on reducing the scheduling length (makespan) of parallel task sets and improving the efficiency of large workflow applications in cloud environments, and effectively managing task dependencies and execution order is crucial to designing efficient workflow scheduling algorithms. This study puts forward a high-efficiency workflow scheduling algorithm based on a predicted makespan matrix (PMMS) for heterogeneous cloud computing environments. First, PMMS calculates the priority of each task from the predicted makespan (PM) matrix and derives the task scheduling list. Second, the optimistic scheduling length (OSL) value of each task is calculated from the PM matrix and the earliest finish time. Third, the best virtual machine is selected for each task according to the minimum OSL value. Extensive experiments show that, compared with the state-of-the-art HEFT, PEFT, and PPTS algorithms, PMMS reduces workflow scheduling length by 6.84%–15.17%, 5.47%–11.39%, and 4.74%–17.27%, respectively, while preserving precedence constraints and without increasing time complexity.
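The three-step pipeline (priority from a makespan prediction, list ordering, minimum-finish-time VM selection) can be sketched with a small HEFT-style list scheduler. The DAG, costs, and the upward-rank formula below are illustrative stand-ins for the paper's PM matrix and OSL values, not its actual definitions.

```python
# Minimal HEFT-style list scheduler: (1) rank tasks by a predicted-makespan
# value, (2) schedule in rank order, (3) place each task on the VM with the
# smallest finish time, respecting precedence and communication costs.

COST = {"A": [4, 6], "B": [3, 5], "C": [5, 4], "D": [2, 3]}  # task -> cost on each of 2 VMs
SUCC = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}    # DAG successors
COMM = 1                                                      # uniform inter-VM comm cost
PRED = {t: [p for p, ss in SUCC.items() if t in ss] for t in COST}

def upward_rank(task, memo={}):
    # average execution cost plus the most expensive path to the exit task
    if task not in memo:
        avg = sum(COST[task]) / len(COST[task])
        memo[task] = avg + max((COMM + upward_rank(s) for s in SUCC[task]), default=0)
    return memo[task]

def schedule():
    order = sorted(COST, key=upward_rank, reverse=True)
    vm_free = [0.0, 0.0]           # earliest free time per VM
    placed = {}                    # task -> (vm, finish_time)
    for t in order:
        best = None
        for vm in (0, 1):
            # a task may start once all predecessors' data has arrived
            ready = max((placed[p][1] + (0 if placed[p][0] == vm else COMM)
                         for p in PRED[t]), default=0.0)
            finish = max(ready, vm_free[vm]) + COST[t][vm]
            if best is None or finish < best[1]:
                best = (vm, finish)
        placed[t] = best
        vm_free[best[0]] = best[1]
    return placed

plan = schedule()
print(plan)
```

On this toy DAG the scheduler overlaps B and C on different VMs, finishing the workflow at time 13; PMMS's contribution is a sharper ranking signal (the PM matrix and OSL) plugged into this same loop.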

Citations: 0
Sustainable Environmental Design Using Green IOT with Hybrid Deep Learning and Building Algorithm for Smart City
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-11-27. DOI: 10.1007/s10723-023-09704-8
Yuting Zhong, Zesheng Qin, Abdulmajeed Alqhatani, Ahmed Sayed M. Metwally, Ashit Kumar Dutta, Joel J. P. C. Rodrigues

Smart cities and urbanization rely on enormous numbers of IoT devices that transfer data for analysis and information processing. These networks can connect billions of devices and transfer essential data from their surroundings, and the tremendous data exchange among billions of gadgets creates a massive demand for energy. Green IoT aims to make the environment a better place while lowering the power usage of IoT devices. In this work, a hybrid deep learning method, green energy-efficient routing (GEER) with a long short-term memory deep Q-network (LSTM-DQN), is used to minimize device energy consumption. First, GEER with Ant Colony Optimization (ACO) and an AutoEncoder (AE) provides efficient routing between devices in the network. Next, the LSTM-DQN-based Reinforcement Learning (RL) method reduces the energy consumption of IoT devices. This hybrid approach leverages the strengths of each technique: ACO and AE contribute to efficient routing decisions, while the LSTM-DQN optimizes energy consumption, resulting in a well-rounded solution. Finally, the proposed GELSDQN-ACO method is compared with previous methods such as RNN-LSTM, DPC-DBN, and LSTM-DQN, and the green IoT design is critically analyzed, implemented, and evaluated.
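The ACO routing component can be illustrated with a bare-bones pheromone search over a small energy-cost graph. The graph, parameters, and cost model below are invented for illustration, and the AE and LSTM-DQN components of GEER are omitted entirely.

```python
import random

# Toy ant-colony optimization for an energy-aware route from "S" to "T".
GRAPH = {                     # node -> {neighbor: energy cost}
    "S": {"A": 2.0, "B": 5.0},
    "A": {"C": 2.0, "B": 1.0},
    "B": {"T": 2.0},
    "C": {"T": 4.0},
    "T": {},
}

def aco_route(src="S", dst="T", ants=30, rounds=30, evap=0.5, seed=1):
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}   # pheromone levels
    best_path, best_cost = None, float("inf")
    for _ in range(rounds):
        found = []
        for _ in range(ants):
            node, path, cost = src, [src], 0.0
            while node != dst:
                choices = [(v, c) for v, c in GRAPH[node].items() if v not in path]
                if not choices:
                    break                                  # dead end: drop this ant
                weights = [tau[(node, v)] / c for v, c in choices]  # pheromone / cost
                v, c = rng.choices(choices, weights=weights)[0]
                path.append(v); cost += c; node = v
            if node == dst:
                found.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for key in tau:                                    # evaporation
            tau[key] *= (1 - evap)
        for path, cost in found:                           # reinforce cheaper routes more
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost

    return best_path, best_cost

route, cost = aco_route()
print(route, cost)
```

Evaporation keeps stale routes from dominating, while the 1/cost deposit steers later ants toward the low-energy path (here S-A-B-T at total cost 5).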

Citations: 0
An Auto-Scaling Approach for Microservices in Cloud Computing Environments
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-11-27. DOI: 10.1007/s10723-023-09713-7
Matineh ZargarAzad, Mehrdad Ashtiani

Recently, microservices have become a commonly used architectural pattern for building cloud-native applications. Cloud computing gives service providers the flexibility to add or remove resources depending on the workload of their web applications; if the resources allocated to a service are not aligned with its requirements, failures and delayed responses increase, resulting in customer dissatisfaction. This problem is a significant challenge in microservices-based applications, because the thousands of microservices in a system may interact in complex ways. Auto-scaling is a cloud computing feature that scales resources on demand, allowing service providers to provision their applications without human intervention under dynamic workloads, minimizing resource cost and latency while maintaining quality-of-service requirements. In this research, we establish a computational model for analyzing the workload of all microservices. To this end, we consider the overall workload entering the system together with the relationships and function calls between microservices, because in a large-scale application with thousands of microservices it is usually difficult to monitor every microservice accurately and gather precise performance metrics. We then develop a multi-criteria decision-making method to select candidate microservices for scaling. We tested the proposed approach on three datasets. The experiments show that input load toward microservices is detected with an average accuracy of about 99%. Furthermore, the proposed approach improves resource utilization by an average of 40.74%, 20.28%, and 28.85% across the three datasets compared with existing methods, achieved by reducing the number of scaling operations by 54.40%, 55.52%, and 69.82%, respectively. This optimization translates into a decrease in required resources, reducing costs by 1.64%, 1.89%, and 1.67%, respectively.
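The two core steps described above, deriving per-service load from the external request rate via the call graph and ranking scaling candidates with a multi-criteria score, might be sketched as follows. The call graph, metrics, weights, and threshold are invented for illustration and are not the paper's actual model.

```python
# (1) Propagate the external request rate through the service call graph to
# estimate each microservice's load without per-service instrumentation.
# (2) Rank scaling candidates by a weighted-sum multi-criteria score.
CALLS = {                      # caller -> {callee: calls per inbound request}
    "gateway": {"auth": 1.0, "catalog": 0.8},
    "auth": {},
    "catalog": {"pricing": 2.0},
    "pricing": {},
}

def derive_load(entry="gateway", external_rps=100.0):
    load = {s: 0.0 for s in CALLS}
    load[entry] = external_rps
    for caller in ("gateway", "auth", "catalog", "pricing"):  # topological order
        for callee, fanout in CALLS[caller].items():
            load[callee] += load[caller] * fanout
    return load

def scaling_candidates(load, cpu, weights=(0.5, 0.5), threshold=0.55):
    # score = weighted mix of normalized derived load and CPU utilization
    max_load = max(load.values())
    scores = {s: weights[0] * (load[s] / max_load) + weights[1] * cpu[s]
              for s in load}
    return sorted((s for s, v in scores.items() if v >= threshold),
                  key=scores.get, reverse=True)

load = derive_load()
cpu = {"gateway": 0.4, "auth": 0.3, "catalog": 0.7, "pricing": 0.9}
print(scaling_candidates(load, cpu))
```

Here "pricing" receives 160 requests/s from a 100 requests/s external load because of call fan-out, so it tops the candidate list even before its CPU figure is considered.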

Citations: 0
AI and Blockchain Assisted Framework for Offloading and Resource Allocation in Fog Computing
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-11-27. DOI: 10.1007/s10723-023-09694-7
Mohammad Aknan, Maheshwari Prasad Singh, Rajeev Arya

The role of Internet of Things (IoT) applications has grown tremendously in areas such as healthcare, agriculture, academia, industry, transportation, and smart cities, improving human life. The number of IoT devices is increasing exponentially, generating volumes of data that IoT nodes cannot handle on their own. A centralized cloud architecture can process this enormous IoT data but fails to offer quality of service (QoS) because of high transmission latency, network congestion, and limited bandwidth. The fog paradigm has evolved to bring computing resources to the network edge, serving latency-sensitive IoT applications. Still, offloading decisions, heterogeneous fog networks, diverse workloads, security issues, energy consumption, and expected QoS remain significant challenges in this area. Hence, we propose a Blockchain-enabled intelligent framework that tackles these issues and allocates optimal resources to incoming IoT requests in a collaborative cloud-fog environment. The framework integrates an Artificial Intelligence (AI) based meta-heuristic algorithm with a high convergence rate and the capability to make offloading decisions at run time, improving result quality, while blockchain technology secures IoT applications and their data against modern attacks. Under similar experimental environments, the proposed framework improves execution time and cost by up to 20% and energy consumption by up to 18% over other meta-heuristic approaches.

Citations: 0
Secured SDN Based Task Scheduling in Edge Computing for Smart City Health Monitoring Operation Management System
IF 5.5, CAS Zone 2 (Computer Science), Q1 Computer Science. Pub Date: 2023-11-22. DOI: 10.1007/s10723-023-09707-5
Shuangshuang Zhang, Yue Tang, Dinghui Wang, Noorliza Karia, Chenguang Wang

Health monitoring systems (HMS) built on wearable IoT devices are constantly being developed and improved, but resource constraints leave most of these devices with limited energy and processing power. Mobile edge computing (MEC) must be used to analyze HMS information in order to decrease bandwidth usage and improve reaction times for latency-dependent, computation-intensive applications. To meet these needs while handling emergencies under HMS, this work offers an effective task planning and resource allocation mechanism in MEC. Utilizing a Software Defined Network (SDN) framework, we provide a priority-aware semi-greedy with genetic algorithm (PSG-GA) method. It assigns tasks different priorities according to their urgency, computed from the data collected by a patient’s smart wearable devices, and determines whether a job should be completed locally at the hospital workstations (HW) or in the cloud. The goal is to minimize both bandwidth cost and overall task processing time. Existing techniques were compared with the proposed SD-PSGA in terms of average latency, job scheduling effectiveness, execution duration, bandwidth consumption, CPU utilization, and power usage. The testing results are encouraging: SD-PSGA can handle emergencies and fulfill latency-sensitive task requirements at a lower bandwidth cost, and the testing model achieves 97–98% accuracy on nearly 200 tasks.
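The priority-then-placement flow can be sketched in a few lines: urgency is scored from invented vital-sign thresholds, tasks are served in priority order, and each task runs on a hospital workstation (HW) when capacity allows, otherwise in the cloud. All thresholds, capacities, and task data below are illustrative assumptions, not the paper's genetic-algorithm scheduler.

```python
# Toy priority-aware greedy scheduler for health-monitoring tasks.
def urgency(heart_rate, spo2):
    # higher score = more urgent; thresholds are illustrative only
    score = 0
    if heart_rate > 120 or heart_rate < 50:
        score += 2                      # abnormal heart rate
    if spo2 < 92:
        score += 3                      # low blood oxygen
    return score

def schedule(tasks, hw_slots=1):
    # tasks: list of (name, heart_rate, spo2, cpu_demand)
    ranked = sorted(tasks, key=lambda t: urgency(t[1], t[2]), reverse=True)
    placement = {}
    for name, hr, spo2, demand in ranked:
        if hw_slots >= demand:          # prefer low-latency local execution
            placement[name] = "HW"
            hw_slots -= demand
        else:
            placement[name] = "cloud"   # overflow to the cloud
    return placement

tasks = [("routine", 80, 98, 1), ("critical", 140, 88, 1)]
print(schedule(tasks))
```

Because the critical task (tachycardia plus low SpO2) outranks the routine one, it claims the single HW slot and the routine task is offloaded to the cloud.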

引用次数: 0
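The priority-then-placement idea in the abstract can be illustrated with a small sketch: each task receives an urgency score derived from wearable-sensor readings, and a semi-greedy pass assigns it to the hospital workstation (HW) or the cloud, choosing randomly among the cheapest few candidate sites (the restricted candidate list). All field names, weights, and cost formulas below are illustrative assumptions, not the paper's actual PSG-GA implementation, and the genetic-algorithm refinement stage is omitted.

```python
import random

def priority(task):
    # Larger vital-sign deviation and a tighter deadline => more urgent.
    return task["vitals_deviation"] / max(task["deadline_s"], 1e-6)

def cost(task, site):
    # Toy cost model: compute time, plus transfer time when offloading.
    proc = task["cycles"] / site["cpu_hz"]
    transfer = task["bytes"] / site["bw_Bps"] if site["remote"] else 0.0
    return proc + transfer

def semi_greedy_schedule(tasks, sites, rcl_size=2, seed=0):
    rng = random.Random(seed)
    plan = {}
    # Most urgent tasks are placed first so they get the cheapest sites.
    for task in sorted(tasks, key=priority, reverse=True):
        ranked = sorted(sites, key=lambda s: cost(task, s))
        plan[task["id"]] = rng.choice(ranked[:rcl_size])["name"]
    return plan

tasks = [
    {"id": "ecg-alert", "vitals_deviation": 9.0, "deadline_s": 1.0,
     "cycles": 2e8, "bytes": 5e4},
    {"id": "daily-report", "vitals_deviation": 0.5, "deadline_s": 600.0,
     "cycles": 5e9, "bytes": 2e7},
]
sites = [
    {"name": "HW", "cpu_hz": 2e9, "bw_Bps": 1e9, "remote": False},
    {"name": "cloud", "cpu_hz": 8e9, "bw_Bps": 1e7, "remote": True},
]
# rcl_size=1 makes the pass purely greedy and the plan deterministic.
print(semi_greedy_schedule(tasks, sites, rcl_size=1))
```

With `rcl_size > 1` the schedule becomes randomized, which is what makes a semi-greedy constructor a useful seed population for a genetic algorithm.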
Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) Based Computation Offloading in MEC for IoT
IF 5.5 CAS Zone 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-21 DOI: 10.1007/s10723-023-09705-7
Jizhou Li, Qi Wang, Shuai Hu, Ling Li

The adoption of User Equipment (UE) is on the rise, driven by advancements in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), the Internet of Things (IoT), and Artificial Intelligence (AI). Among these, MEC stands out as a pivotal aspect of the 5G network. A critical challenge within the realm of MEC is task offloading, which involves optimizing conflicting factors such as execution time, energy usage, and computation duration. Addressing the offloading of interdependent tasks poses a further significant hurdle. Existing models are single-objective, neglect task dependencies, and are computationally expensive. As a result, the Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) algorithm is proposed to offload dependent tasks to the MEC with three objectives: minimizing the execution delay and reducing the energy consumption and cost of MEC resources. Standard whale optimization is combined with differential evolution (DE), customized mutation operations, and an immune mechanism to enhance its search strategy. The proposed HIWDEO reduces the energy consumption and overhead incurred by the UE in executing its tasks. Comparison of the developed model with other optimization approaches shows the superiority of HIWDEO.
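A minimal sketch of the hybrid search described in the abstract: a Whale Optimization Algorithm (WOA) position update is blended with a DE/rand/1 mutation, and a greedy survivor selection stands in for the immune mechanism. The fitness function is a toy weighted sum of delay, energy, and resource cost for an offloading fraction x in [0, 1]; all coefficients and parameters are illustrative assumptions, not the paper's HIWDEO implementation.

```python
import math
import random

def fitness(x):
    # Toy objectives for offloading a fraction x of the workload to MEC.
    delay = (1 - x) * 4.0 + x * 1.0 + x * 0.5   # local run vs. edge run + transmit
    energy = (1 - x) * 3.0 + x * 0.8            # local CPU burns more energy
    cost = x * 1.2                              # rented MEC resources
    return 0.5 * delay + 0.3 * energy + 0.2 * cost

def hiwdeo_sketch(pop_size=10, iters=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for t in range(iters):
        a = 2 - 2 * t / iters                   # WOA control parameter shrinks over time
        for i, x in enumerate(pop):
            A = a * (2 * rng.random() - 1)
            if abs(A) < 1:                      # encircle the best whale
                cand = best - A * abs(2 * rng.random() * best - x)
            else:                               # spiral-style exploration move
                cand = best + abs(best - x) * math.exp(0.5) * math.cos(2 * math.pi * rng.random())
            # DE/rand/1 mutation over three peers sharpens the search.
            r1, r2, r3 = rng.sample(range(pop_size), 3)
            mutant = pop[r1] + 0.5 * (pop[r2] - pop[r3])
            cand = min(max(0.5 * cand + 0.5 * mutant, 0.0), 1.0)
            if fitness(cand) < fitness(x):      # greedy (immune-style) selection
                pop[i] = cand
        best = min(pop + [best], key=fitness)
    return best, fitness(best)

x, f = hiwdeo_sketch()
print(f"offload fraction {x:.3f}, fitness {f:.3f}")
```

Because this toy fitness decreases linearly in x, the search should converge toward full offloading; a real deployment would use task-dependency-aware, nonlinear objectives.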

Citations: 0
Preservation of Sensitive Data Using Multi-Level Blockchain-based Secured Framework for Edge Network Devices
IF 5.5 CAS Zone 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-17 DOI: 10.1007/s10723-023-09699-2
Charu Awasthi, Prashant Kumar Mishra, Pawan Kumar Pal, Surbhi Bhatia Khan, Ambuj Kumar Agarwal, Thippa Reddy Gadekallu, Areej A. Malibari

The proliferation of IoT devices has influenced end users in several respects. Yottabytes (YB) of information are being produced in IoT environments because of the ever-increasing utilization capacity of the Internet. Since sensitive information and privacy problems remain unsolved even with best-in-class information governance standards, it is difficult to bolster defensive security capabilities. Secure data sharing across disparate systems is made possible by blockchain technology, which operates on a decentralized computing paradigm. In ever-changing IoT environments, blockchain technology provides immutability across a wide range of services and use cases. Therefore, blockchain technology can be leveraged to securely hold private information, even in the dynamic context of the IoT. However, as the rate of change in IoT networks accelerates, every potential weak point in the system is exposed, making it more challenging to keep sensitive data secure. In this study, we adopted a Multi-level Blockchain-based Secured Framework (M-BSF) to provide multi-level protection for sensitive data in the face of threats to IoT-based networking systems. The envisioned M-BSF framework incorporates edge-level, fog-level, and cloud-level security. At the edge and fog levels, Baby Kyber and scaled Kyber cryptosystems are applied to ensure data preservation. Kyber is a cryptosystem that adopts public-key encryption and private-key decryption. Each block of the blockchain uses the cloud-based Argon-2di hashing method for cloud-level data storage, providing the highest level of confidentiality. Argon-2di is a stable hashing algorithm that uses a hybrid of data-dependent and data-independent memory access. Based on the attack-resistance rate (> 96%), computational cost (in time), and other key metrics, the proposed M-BSF security architecture appears to be an acceptable alternative to current methodologies.
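A toy sketch of the chained-block storage layer described above: each block binds its payload to the previous block's digest with a memory-hard hash, so tampering with any block invalidates the chain. Python's standard library has no Argon2 implementation, so `hashlib.scrypt` (also memory-hard) stands in for Argon-2di here, and the Kyber encryption layer is omitted; all field names and parameters are illustrative assumptions, not the M-BSF design.

```python
import hashlib
import json
import os

def block_digest(payload: bytes, prev_digest: bytes, salt: bytes) -> bytes:
    # Memory-hard digest over the previous digest plus this block's payload.
    # scrypt with n=2**12, r=8 needs ~4 MiB, well under the default maxmem.
    return hashlib.scrypt(prev_digest + payload, salt=salt,
                          n=2**12, r=8, p=1, dklen=32)

def append_block(chain, payload: bytes):
    prev = chain[-1]["digest"] if chain else b"\x00" * 32
    salt = os.urandom(16)
    chain.append({"payload": payload, "salt": salt,
                  "digest": block_digest(payload, prev, salt)})

def verify_chain(chain) -> bool:
    # Recompute every digest from the genesis block forward.
    prev = b"\x00" * 32
    for blk in chain:
        if blk["digest"] != block_digest(blk["payload"], prev, blk["salt"]):
            return False
        prev = blk["digest"]
    return True

chain = []
append_block(chain, json.dumps({"patient": "p-001", "hr": 72}).encode())
append_block(chain, json.dumps({"patient": "p-001", "hr": 118}).encode())
print(verify_chain(chain))          # True: digests line up
chain[0]["payload"] = b"tampered"
print(verify_chain(chain))          # False: tampering breaks the chain
```

In a real system the payload would be a Kyber ciphertext and the memory-hard hash an Argon-2di call, but the chaining and verification logic would have the same shape.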

Citations: 0
A Novel Approach to Cloud Resource Management: Hybrid Machine Learning and Task Scheduling
CAS Zone 2 (Computer Science) Q1 Computer Science Pub Date : 2023-11-13 DOI: 10.1007/s10723-023-09702-w
Hong Zhou
Citations: 0