Journal of Grid Computing: Latest Publications

On-Chain and Off-Chain Data Management for Blockchain-Internet of Things: A Multi-Agent Deep Reinforcement Learning Approach
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-20 | DOI: 10.1007/s10723-023-09739-x
Y. P. Tsang, C. K. M. Lee, Kening Zhang, C. H. Wu, W. H. Ip

The emergence of blockchain technology has seen applications increasingly hybridise cloud storage and distributed ledger technology in the Internet of Things (IoT) and cyber-physical systems, complicating data management in decentralised applications (DApps). Because blockchain technology is inefficient at handling large amounts of data, effective on-chain and off-chain data management across peer-to-peer networks and cloud storage has drawn considerable attention. Space reservation is a cost-effective way to manage cloud storage, in contrast to requesting additional space on demand in real time. Furthermore, off-chain data replication in the peer-to-peer network can eliminate single points of failure in DApps. However, recent research has rarely discussed optimising on-chain and off-chain data management in the blockchain-enabled IoT (BIoT) environment. In this study, the BIoT environment is modelled with cloud storage and blockchain orchestrated over the peer-to-peer network. The asynchronous advantage actor-critic (A3C) algorithm is applied to train intelligent agents towards an optimal policy for data packing, space reservation, and data replication, yielding an intelligent data management strategy. The experimental analysis reveals that the proposed scheme converges rapidly and outperforms other typical schemes in average total reward, enhancing the scalability, security and reliability of blockchain-IoT networks.
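As a concrete, hedged illustration of the learner behind this scheme, the sketch below runs a single-worker advantage actor-critic update over three stand-in data-management actions (on-chain packing, space reservation, off-chain replication). The state features, toy reward, and hyperparameters are assumptions rather than the paper's model, and A3C's asynchronous workers are omitted.

```python
# Hedged sketch: one-worker advantage actor-critic over three stand-in
# data-management actions. State features, reward shape, and learning
# rates are hypothetical; A3C's asynchronous workers are omitted.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 4, 3      # e.g. queue length, free space, load, link use
ACTIONS = ["pack_on_chain", "reserve_space", "replicate_off_chain"]
GAMMA, LR = 0.95, 0.01

W_pi = rng.normal(scale=0.1, size=(N_FEATURES, N_ACTIONS))  # actor weights
w_v = rng.normal(scale=0.1, size=N_FEATURES)                # critic weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def env_step(state, action):
    # Toy reward standing in for the paper's storage-cost/latency signal.
    reward = -abs(state[action % N_FEATURES] - 0.5)
    return reward, rng.random(N_FEATURES)

state = rng.random(N_FEATURES)
for _ in range(1000):
    probs = softmax(state @ W_pi)
    action = rng.choice(N_ACTIONS, p=probs)
    reward, next_state = env_step(state, action)
    # One-step TD advantage: A = r + gamma * V(s') - V(s).
    advantage = reward + GAMMA * (next_state @ w_v) - state @ w_v
    w_v += LR * advantage * state                 # critic: semi-gradient TD(0)
    grad_logits = -probs
    grad_logits[action] += 1.0                    # d log pi(a|s) / d logits
    W_pi += LR * advantage * np.outer(state, grad_logits)  # actor update
    state = next_state
```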

Citations: 0
3D Lidar Target Detection Method at the Edge for the Cloud Continuum
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-19 | DOI: 10.1007/s10723-023-09736-0
Xuemei Li, Xuelian Liu, Da Xie, Chong Chen

In the Internet of Things, machine learning at the edge of the cloud continuum is developing rapidly, providing more convenient services for developers. The paper proposes a lidar target detection method for the cloud continuum based on a scene density-awareness network. The density-awareness network architecture is designed, and a context column feature network is proposed. The BEV density attention feature network is built by cascading the density feature map with a spatial attention mechanism; it is then connected with the BEV column feature network to generate the ablation BEV map. A multi-head detector regresses the object center point, scale and direction, and a loss function provides active supervision. The experiment is conducted on Alibaba Cloud services. On the KITTI validation dataset, 3D and BEV objects are detected and evaluated for three object classes. The results show that most AP values of the proposed density-awareness model are higher than those of other methods, with a detection time of 0.09 s, meeting the high-accuracy, real-time requirements of vehicle-borne lidar target detection.
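To make the density-attention idea concrete, the following sketch rasterises a point cloud into a bird's-eye-view density map and uses it as a spatial attention mask over a BEV feature map. The grid resolution, KITTI-style ranges, and sigmoid attention form are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch: rasterise lidar points into a BEV density map and use it
# as a spatial attention mask over BEV features. Grid resolution, ranges,
# and the sigmoid attention form are illustrative, not the paper's network.
import numpy as np

def bev_density(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), cell=0.4):
    """Count lidar points per BEV cell; points is an N x 3 array of x, y, z."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    density, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                   bins=(nx, ny), range=(x_range, y_range))
    return density

def density_attention(features, density):
    """Scale a C x H x W BEV feature map by a normalised density mask."""
    z = (density - density.mean()) / (density.std() + 1e-6)
    att = 1.0 / (1.0 + np.exp(-z))          # sigmoid spatial attention
    return features * att[None, :, :]       # broadcast across channels

points = np.random.rand(10000, 3) * [70.4, 80.0, 3.0] - [0.0, 40.0, 1.0]
density = bev_density(points)
features = np.random.rand(64, *density.shape)
attended = density_attention(features, density)   # input to the BEV column net
```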

Citations: 0
AI-Driven Task Scheduling Strategy with Blockchain Integration for Edge Computing
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-19 | DOI: 10.1007/s10723-024-09743-9
Avishek Sinha, Samayveer Singh, Harsh K. Verma

In recent times, edge computing has arisen as a highly promising paradigm aimed at facilitating resource-intensive Internet of Things (IoT) applications by offering low-latency services. However, the constrained computational capabilities of the IoT nodes present considerable obstacles when it comes to efficient task-scheduling applications. In this paper, a nature-inspired coati optimization-based energy-aware task scheduling (CO-ETS) approach is proposed to address the challenge of efficiently assigning tasks to available edge devices. The proposed work incorporates a fitness function that effectively enhances task assignment optimization, leading to improved system efficiency, reduced power consumption, and enhanced system reliability. Moreover, we integrate blockchain with AI-driven task scheduling to fortify security, protect user privacy, and optimize edge computing in IoT-based environments. The blockchain-based approach ensures a secure and trusted decentralized identity management and reputation system for IoT edge networks. To validate the effectiveness of the proposed CO-ETS approach, we conduct a comparative analysis against state-of-the-art methods by considering metrics such as makespan, CPU execution time, energy consumption, and mean wait time. The proposed approach offers promising solutions to optimize task allocation, enhance system performance, and ensure secure and privacy-preserving operations in edge computing environments.
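A hedged sketch of what such a fitness function evaluates is given below: an assignment of tasks to edge devices is scored by a weighted sum of makespan and energy. Coati optimization's actual update rules are replaced here by plain random search, and the device parameters and objective weights are illustrative assumptions.

```python
# Hedged sketch: the kind of fitness function an energy-aware scheduler
# minimises, a weighted sum of makespan and energy for a task-to-device
# assignment. Coati optimization's update rules are replaced by plain
# random search; device data and weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_TASKS, N_DEVICES = 20, 5
task_len = rng.uniform(1, 10, N_TASKS)      # task sizes (e.g. MI)
dev_speed = rng.uniform(1, 4, N_DEVICES)    # device speeds (e.g. MIPS)
dev_power = rng.uniform(0.5, 2, N_DEVICES)  # active power draw (W)
W_TIME, W_ENERGY = 0.7, 0.3                 # objective weights (assumed)

def fitness(assign):
    """Lower is better: weighted makespan plus total energy."""
    busy = np.zeros(N_DEVICES)
    for task, dev in enumerate(assign):
        busy[dev] += task_len[task] / dev_speed[dev]
    return W_TIME * busy.max() + W_ENERGY * (busy * dev_power).sum()

best, best_f = None, np.inf
for _ in range(5000):                       # stand-in for the coati swarm
    cand = rng.integers(0, N_DEVICES, N_TASKS)
    f = fitness(cand)
    if f < best_f:
        best, best_f = cand, f
print(f"best fitness {best_f:.2f} for assignment {best}")
```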

Citations: 0
Optimizing Accounting Informatization through Simultaneous Multi-Tasking across Edge and Cloud Devices using Hybrid Machine Learning Models
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-18 | DOI: 10.1007/s10723-023-09735-1
Xiaofeng Yang

Accounting informatization is a crucial component of enterprise informatization, significantly impacting operational efficiency in accounting and finance. Advances in information technology have introduced automation techniques that accelerate the processing of accounting information cost-effectively. Integrating artificial intelligence, cloud computing, and edge computing is pivotal in streamlining and optimizing these processes. Traditionally, accounting informatization relied on system servers and local storage for data processing. However, the era of big data necessitates a shift to cloud computing frameworks for efficient data storage and processing. Despite the advantages of cloud storage, concerns arise regarding data security and the substantial data transactions between the cloud and source devices. To address these challenges, this research proposes a novel algorithm, Heterogeneous Distributed Deep Learning with Data Offloading (DDLO). DDLO leverages the synergy between edge devices and cloud computing to enhance data processing. Edge computing enables rapid processing of large volumes of data at or near the data collection sites, optimizing day-to-day operations for enterprises. Furthermore, machine learning algorithms on edge devices improve data processing efficiency, augmenting the computing environment's overall performance. The proposed DDLO algorithm fosters a hybrid machine learning approach to joint-task computation and multi-tasking in accounting informatization. It enables dynamic resource allocation, allowing selected data or model updates to be offloaded to the cloud for complex tasks. The algorithm's performance is rigorously evaluated using key metrics, including computing time, offloading time, accuracy, and cost. By capitalizing on the strengths of edge computing, cloud computing, and artificial intelligence, the DDLO algorithm effectively addresses accounting informatization challenges. It empowers enterprises to process vast amounts of accounting data efficiently and securely while improving overall operational efficiency. In terms of time, task offloading with DDLO on the TeraSort workload takes roughly 33 ms, less than the other techniques.
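The offloading trade-off described above can be caricatured as a cost comparison between local edge compute and transfer-plus-cloud compute. The rule-based sketch below is only a loose stand-in for DDLO's learned policy; every rate and constant in it is a hypothetical assumption.

```python
# Hedged sketch: a rule-based stand-in for DDLO's offloading decision,
# comparing estimated edge-compute cost against transfer-plus-cloud cost.
# All rates and the cost model itself are hypothetical assumptions.
def offload_decision(size_mb, edge_mips, cloud_mips, link_mbps,
                     work_per_mb=50.0):
    """Return where to process one data block: 'edge' or 'cloud'."""
    edge_time = size_mb * work_per_mb / edge_mips          # compute locally
    cloud_time = (size_mb * 8 / link_mbps                  # upload first...
                  + size_mb * work_per_mb / cloud_mips)    # ...then compute
    return "cloud" if cloud_time < edge_time else "edge"

for link in (10, 50, 200):   # the decision flips as the uplink improves
    print(link, "Mbps ->", offload_decision(10, edge_mips=400,
                                            cloud_mips=4000, link_mbps=link))
```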

Citations: 0
CMSV: a New Cloud Multi-Agents for Self-Driving Vehicles as a Services
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-13 | DOI: 10.1007/s10723-023-09734-2
Aida A. Nasr
{"title":"CMSV: a New Cloud Multi-Agents for Self-Driving Vehicles as a Services","authors":"Aida A. Nasr","doi":"10.1007/s10723-023-09734-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09734-2","url":null,"abstract":"","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139437563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid Fuzzy Neural Network for Joint Task Offloading in the Internet of Vehicles
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-09 | DOI: 10.1007/s10723-023-09724-4
Bingtao Liu

The Internet of Vehicles (IoV) technology is progressively maturing thanks to the growth of private cars and the establishment of intelligent transportation systems. The development of smart cars has therefore been accompanied by a parallel rise in the volume of in-vehicle media and video games, and a massive increase in the need for processing resources. Smart cars cannot process the enormous quantity of requests vehicles create, because they have limited computing power and must keep many outstanding jobs in their queues. Edge servers distributed along the customer side of the highway can serve resource requests in real time and help offset the shortage of computational power. Nevertheless, the substantial amount of energy consumed during processing is also an issue that must be addressed. A joint task offloading strategy based on mobile edge computing and fog computing (EFTO) is presented in this paper to address this problem. Practically, the position of the processing activity is first discovered by obtaining the computing task's route, which reveals all the task's routing details from the starting point to the destination. Next, to minimize the time and energy expended during offloading and processing, a multi-objective optimization problem is solved using the task offloading technique F-TORA, based on the Takagi–Sugeno fuzzy neural network (T-S FNN). Finally, comparative trials showing reduced time consumption and optimized energy use relative to alternative offloading techniques prove the effectiveness of EFTO.
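For intuition about the T-S FNN at the core of F-TORA, the sketch below performs one Takagi–Sugeno inference step over two hand-written rules: Gaussian memberships give rule firing strengths, and the output is a firing-strength-weighted average of linear consequents. The inputs, centres, widths, and consequent weights are illustrative assumptions; the paper learns them inside a neural network.

```python
# Hedged sketch: a two-rule Takagi-Sugeno fuzzy inference step of the kind
# a T-S FNN learns. Inputs might be normalised task size and channel
# quality; centres, widths, and consequent weights here are illustrative.
import numpy as np

centres = np.array([[0.2, 0.8],    # rule 1: small task, good channel
                    [0.8, 0.2]])   # rule 2: large task, poor channel
sigmas = np.full((2, 2), 0.3)      # Gaussian membership widths
conseq = np.array([[0.5, -0.2, 0.1],   # rule consequents: a0 + a1*x1 + a2*x2
                   [-0.3, 0.4, 0.6]])

def ts_fuzzy_score(x):
    """Weighted average of linear rule consequents; higher favours offloading."""
    mu = np.exp(-0.5 * ((x - centres) / sigmas) ** 2)  # memberships per input
    w = mu.prod(axis=1)                                # rule firing strengths
    w /= w.sum()                                       # normalise
    y = conseq[:, 0] + conseq[:, 1:] @ x               # each rule's output
    return float(w @ y)

print(ts_fuzzy_score(np.array([0.3, 0.7])))   # score for one task state
```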

Citations: 0
Parking Cooperation-Based Mobile Edge Computing Using Task Offloading Strategy
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-08 | DOI: 10.1007/s10723-023-09721-7
XuanWen, Hai Meng Sun

The surge in computing demands of onboard devices in vehicles has necessitated the adoption of mobile edge computing (MEC) to cater to their computational and storage needs. This paper presents a task offloading strategy for mobile edge computing based on collaborative roadside parking cooperation, leveraging idle computing resources in roadside vehicles. The proposed method establishes resource sharing and mutual utilization among roadside vehicles, roadside units (RSUs), and cloud servers, transforming the computing task offloading problem into a constrained optimization challenge. To address the complexity of this optimization problem, a novel Hybrid Algorithm based on the Hill-Climbing and Genetic Algorithm (HHGA) is proposed, combined with the powerful Simulated Annealing (SA) algorithm. The HHGA-SA Algorithm integrates the advantages of both HHGA and SA to efficiently explore the solution space and optimize task execution with reduced delay and energy consumption. The HHGA component of the algorithm utilizes the strengths of Genetic Algorithm and Hill-Climbing. The Genetic Algorithm enables global exploration, identifying potential optimal solutions, while Hill-Climbing refines the solutions locally to improve their quality. By harnessing the synergies between these techniques, the HHGA-SA Algorithm navigates the multi-constraint landscape effectively, producing robust and high-quality solutions for task offloading. To evaluate the efficacy of the proposed approach, extensive simulations are conducted in a realistic roadside parking cooperation-based Mobile Edge Computing scenario. Comparative analyses with standard Genetic Algorithms and Hill-Climbing demonstrate the superiority of the HHGA-SA Algorithm, showcasing substantial enhancements in task execution efficiency and energy utilization. The study highlights the potential of leveraging idle computing resources in roadside parking vehicles to enhance Mobile Edge Computing capabilities. The collaborative approach facilitated by the HHGA-SA Algorithm fosters efficient task offloading among roadside vehicles, RSUs, and cloud servers, elevating overall system performance.
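A hedged sketch of the hybrid's moving parts follows: genetic crossover and mutation for global exploration, one hill-climbing pass per child for local refinement, and simulated-annealing acceptance so occasional worse children survive early on. The makespan cost model and every hyperparameter are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of the HHGA-SA idea: genetic crossover/mutation explores
# task-to-server assignments, each child gets one hill-climbing refinement,
# and simulated-annealing acceptance occasionally keeps worse children.
import numpy as np

rng = np.random.default_rng(2)
N_TASKS, N_SERVERS, POP = 30, 6, 20
load = rng.uniform(1, 5, N_TASKS)            # per-task work (assumed)

def cost(a):
    """Makespan of assignment a (task index -> server index)."""
    return np.bincount(a, weights=load, minlength=N_SERVERS).max()

def hill_climb(a):
    """Local refinement: move one random task to its best server."""
    a = a.copy()
    t = rng.integers(N_TASKS)
    for s in range(N_SERVERS):
        trial = a.copy()
        trial[t] = s
        if cost(trial) < cost(a):
            a = trial
    return a

pop = [rng.integers(0, N_SERVERS, N_TASKS) for _ in range(POP)]
temp = 5.0
for _ in range(300):
    pop.sort(key=cost)
    cut = rng.integers(1, N_TASKS)
    child = np.concatenate([pop[0][:cut], pop[1][cut:]])   # crossover of elites
    mask = rng.random(N_TASKS) < 0.05
    child[mask] = rng.integers(0, N_SERVERS, mask.sum())   # mutation
    child = hill_climb(child)
    delta = cost(child) - cost(pop[-1])
    # SA acceptance: take improvements, or worse children with prob e^(-d/T).
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        pop[-1] = child
    temp *= 0.99                                           # cooling schedule
print("best makespan:", cost(min(pop, key=cost)))
```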

Citations: 0
A Cloud-Edge-Based Multi-Objective Task Scheduling Approach for Smart Manufacturing Lines
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2024-01-08 | DOI: 10.1007/s10723-023-09723-5
Huayi Yin, Xindong Huang, Erzhong Cao

The number of task demands created by smart terminals is rising dramatically because of the increasing usage of industrial Internet technologies in intelligent production lines. Response speed is vital when dealing with such large workloads, and existing approaches struggle with the task scheduling flow of smart manufacturing lines. The proposed method addresses these limitations, particularly in the context of task scheduling and task scheduling flow within intelligent production lines. This study concentrates on the multi-objective task scheduling challenge in intelligent manufacturing by introducing a task scheduling approach based on job prioritization. To achieve this, a multi-objective task scheduling mechanism was developed, aiming to reduce service latency and energy consumption. This mechanism was integrated into a cloud-edge computing framework for intelligent production lines. The task scheduling strategy and task flow scheduling were optimized using Particle Swarm Optimization (PSO) and the Gravitational Search Algorithm (GSA). Finally, thorough simulation studies evaluate Multi-PSG, demonstrating that it beats every other tested algorithm in job completion rate. The completion rate of all tasks exceeds 90% when the number of nodes exceeds 10, satisfying the real-time demands of the related tasks in smart manufacturing processes. The method also outperforms the other methods in power usage and maximum completion rate.
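As a minimal illustration of the PSO half of the optimizer, the sketch below evolves particles whose continuous positions decode to task-to-node assignments, scoring each by a blend of latency and energy. The GSA component is omitted, and all coefficients are illustrative assumptions.

```python
# Hedged sketch: PSO over task-to-node assignments; each particle holds a
# continuous position per task that is floored to a node index. The GSA
# half of the paper's PSO+GSA hybrid is omitted; coefficients are assumed.
import numpy as np

rng = np.random.default_rng(3)
N_TASKS, N_NODES, N_PARTICLES = 25, 8, 15
work = rng.uniform(1, 6, N_TASKS)
speed = rng.uniform(1, 3, N_NODES)
power = rng.uniform(0.5, 2, N_NODES)

def objective(pos):
    nodes = np.clip(pos.astype(int), 0, N_NODES - 1)      # decode particle
    busy = np.bincount(nodes, weights=work, minlength=N_NODES) / speed
    return 0.6 * busy.max() + 0.4 * (busy * power).sum()  # latency + energy

X = rng.uniform(0, N_NODES, (N_PARTICLES, N_TASKS))       # positions
V = np.zeros_like(X)                                      # velocities
P, p_val = X.copy(), np.array([objective(x) for x in X])  # personal bests
g = P[p_val.argmin()].copy()                              # global best

for _ in range(300):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X) # inertia + pulls
    X = np.clip(X + V, 0, N_NODES - 1e-9)
    vals = np.array([objective(x) for x in X])
    improved = vals < p_val
    P[improved], p_val[improved] = X[improved], vals[improved]
    g = P[p_val.argmin()].copy()
print("best blended cost:", objective(g))
```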

Citations: 0
Novel Transformation Deep Learning Model for Electrocardiogram Classification and Arrhythmia Detection using Edge Computing
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2023-12-30 | DOI: 10.1007/s10723-023-09717-3
Yibo Han, Pu Han, Bo Yuan, Zheng Zhang, Lu Liu, John Panneerselvam

The diagnosis of cardiovascular disease relies heavily on the automated classification of electrocardiograms (ECG) for arrhythmia monitoring, which is often performed using machine learning (ML) algorithms. However, current ML algorithms are typically deployed using cloud-based inference, which may not meet the reliability and security requirements of ECG monitoring. A newer solution, edge inference, has been developed to address speed, security, connection, and reliability issues. This paper presents an edge-based algorithm that combines the continuous wavelet transform (CWT) and short-time Fourier transform (STFT) with a hybrid convolutional neural network (CNN) and Long Short-Term Memory (LSTM) model for real-time ECG classification and arrhythmia detection. The algorithm incorporates an STFT/CWT-based 1D convolutional (Conv1D) layer acting as a Finite Impulse Response (FIR) filter to generate the spectrogram of the input ECG signal. The output feature maps from the Conv1D layer are then reshaped into a 2D heart-map image and fed into the hybrid 2D-CNN and LSTM classification model. The MIT-BIH arrhythmia database is used to train and evaluate the model. Using a cloud platform, four model versions are trained, evaluated, and optimized for edge computing on a Raspberry Pi device. Techniques such as weight quantization and pruning enhance the algorithms created for edge inference. The proposed classifiers can operate with a total target size of 90 KB, an overall inference time of 9 ms, and a memory use of 12 MB, while achieving up to 99.6% classification accuracy and a 99.88% F1-score at the edge. These results make the suggested classifier highly versatile and suitable for arrhythmia monitoring on various edge devices.
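A minimal sketch of the transform front end is shown below: an STFT turns a synthetic ECG-like segment into a 2-D time-frequency "heart map" of the kind fed to the CNN-LSTM. The stand-in signal, window length, and log scaling are assumptions; the paper realises the transform as a fixed Conv1D (FIR) layer and trains on MIT-BIH.

```python
# Hedged sketch of the transform front end: STFT of a synthetic ECG-like
# segment reshaped into a 2-D "heart map" for a downstream CNN-LSTM.
import numpy as np
from scipy.signal import stft

fs = 360                                  # MIT-BIH sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # stand-in

freqs, frames, Z = stft(ecg, fs=fs, nperseg=128, noverlap=64)
heart_map = np.log1p(np.abs(Z))           # time-frequency magnitude image
print(heart_map.shape)                    # (freq_bins, time_frames) -> 2D-CNN
```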

Citations: 0
Anomaly Detection in Cloud Computing using Knowledge Graph Embedding and Machine Learning Mechanisms
IF 5.5 | CAS Tier 2, Computer Science | Q1 Computer Science | Pub Date: 2023-12-29 | DOI: 10.1007/s10723-023-09727-1
Katerina Mitropoulou, Panagiotis Kokkinos, Polyzois Soumplis, Emmanouel Varvarigos

The orchestration of cloud computing infrastructures is challenging, considering the number, heterogeneity and dynamicity of the involved resources, along with the highly distributed nature of the applications that use them for computation and storage. Evidently, the volume of relevant monitoring data can be significant, and the ability to collect, analyze, and act on this data in real time is critical for the infrastructure’s efficient use. In this study, we introduce a novel methodology that adeptly manages the diverse, dynamic, and voluminous nature of cloud resources and the applications that they support. We use knowledge graphs to represent computing and storage resources and illustrate the relationships between them and the applications that utilize them. We then train GraphSAGE to acquire vector-based representations of the infrastructures’ properties, while preserving the structural properties of the graph. These are efficiently provided as input to two unsupervised machine learning algorithms, namely CBLOF and Isolation Forest, for the detection of storage and computing overusage events, where CBLOF demonstrates better performance across all our evaluation metrics. Following the detection of such events, we have also developed appropriate re-optimization mechanisms that ensure the performance of the served applications. Evaluated in a simulated environment, our methods demonstrate a significant advancement in anomaly detection and infrastructure optimization. The results underscore the potential of this closed-loop operation in dynamically adapting to the evolving demands of cloud infrastructures. By integrating data representation and machine learning methods with proactive management strategies, this research contributes substantially to the field of cloud computing, offering a scalable, intelligent solution for modern cloud infrastructures.
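As a compact, hedged illustration of the two-stage pipeline, the sketch below computes GraphSAGE-style mean-aggregated embeddings with random, untrained weights on a synthetic graph and scores them with scikit-learn's Isolation Forest; CBLOF, which the study found stronger, would slot in at the same point.

```python
# Hedged sketch of the two-stage pipeline: a mean-aggregator GraphSAGE
# layer with random (untrained) weights embeds nodes of a synthetic
# resource/application graph, then Isolation Forest scores the embeddings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
N, F, D = 100, 8, 16                       # nodes, raw features, embedding dim
X = rng.normal(size=(N, F))                # e.g. CPU, memory, I/O metrics
A = (rng.random((N, N)) < 0.05).astype(float)   # synthetic adjacency
W_self = rng.normal(size=(F, D))
W_neigh = rng.normal(size=(F, D))

deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
neigh_mean = (A @ X) / deg                 # mean of neighbour features
emb = np.tanh(X @ W_self + neigh_mean @ W_neigh)   # GraphSAGE-style layer

forest = IsolationForest(random_state=0).fit(emb)
scores = forest.decision_function(emb)     # lower = more anomalous
print("most anomalous nodes:", np.argsort(scores)[:5])
```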

Citations: 0