
Latest publications from IEEE Transactions on Sustainable Computing

Self-Optimizing the Environmental Sustainability of Blockchain-Based Systems
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-10-19. DOI: 10.1109/TSUSC.2023.3325881
Akram Alofi;Mahmoud A. Bokhari;Rami Bahsoon;Robert Hendley
Blockchain technology has been widely adopted in many areas to provide more dependable and trustworthy systems, including digital infrastructure. Nevertheless, its widespread implementation is accompanied by significant environmental concerns, as it is considered a substantial contributor to greenhouse gas emissions. This environmental impact is mainly attributed to the inherent inefficiencies of its consensus algorithms, notably Proof of Work, which demands substantial computational power for trust establishment. This paper proposes a novel self-adaptive model to optimize the environmental sustainability of blockchain-based systems, addressing energy consumption and carbon emission without compromising the fundamental properties of blockchain technology. The model continuously monitors a blockchain-based system and adaptively selects miners, considering context changes and user needs. It dynamically selects a subset of miners to perform sustainable mining processes while ensuring the decentralization and trustworthiness of the system. The aim is to minimize blockchain-based systems' energy consumption and carbon emissions while maximizing their decentralization and trustworthiness. We conduct experiments to evaluate the efficiency and effectiveness of the model. The results show that our self-optimizing model can reduce energy consumption by 55.49% and carbon emissions by 71.25% on average while maintaining desirable levels of decentralization and trustworthiness by more than 96.08% and 75.12%, respectively. Furthermore, these enhancements can be achieved under different operating conditions compared to similar models, including the straightforward use of Proof of Work. Also, we have investigated and discussed the correlation between these objectives and how they are related to the number of miners within the blockchain-based systems.
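The abstract describes selecting a subset of miners that trades off energy and carbon against decentralization and trustworthiness. Below is a minimal, illustrative Python sketch of that idea; the per-miner scores, weights, minimum subset size, and exhaustive search are assumptions made for the example, not the authors' actual self-adaptive model, which monitors the running system and adapts the selection continuously.

```python
from itertools import combinations

miners = [
    # (id, energy_kwh, carbon_kg, trust_score) -- illustrative numbers
    ("m1", 12.0, 5.0, 0.90),
    ("m2", 30.0, 14.0, 0.95),
    ("m3", 8.0, 3.5, 0.70),
    ("m4", 20.0, 9.0, 0.85),
    ("m5", 10.0, 4.0, 0.80),
]

def cost(subset, w_energy=0.4, w_carbon=0.4, w_trust=20.0):
    energy = sum(m[1] for m in subset)
    carbon = sum(m[2] for m in subset)
    trust = sum(m[3] for m in subset) / len(subset)
    # Lower energy and carbon are better, higher average trust is better.
    return w_energy * energy + w_carbon * carbon - w_trust * trust

def select_miners(miners, k_min=3):
    # Keep at least k_min miners so the system stays reasonably decentralized.
    best, best_cost = None, float("inf")
    for k in range(k_min, len(miners) + 1):
        for subset in combinations(miners, k):
            c = cost(subset)
            if c < best_cost:
                best, best_cost = subset, c
    return [m[0] for m in best]

print(select_miners(miners))  # ['m1', 'm3', 'm5'] with the numbers above
```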
Citations: 0
Multi-Type Charging Scheduling Based on Area Requirement Difference for Wireless Rechargeable Sensor Networks
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-10-17. DOI: 10.1109/TSUSC.2023.3325237
Yang Yang;Xuxun Liu;Kun Tang;Wenquan Che;Quan Xue
Charging scheduling plays a crucial role in ensuring durable operation for wireless rechargeable sensor networks. However, previous methods cannot meet the strict requirements of a high node survival rate and high energy usage effectiveness. In this article, we propose a multi-type charging scheduling strategy to meet such demands. In this strategy, the network is divided into an inner ring and an outer ring to satisfy different demands in different areas. The inner ring forms a flat topology, and adopts a periodic and single-node charging pattern mainly for a high node survival rate. A space priority and a time priority are designed to determine the charging sequence of the nodes. The optimal charging cycle and the optimal charging time are achieved by mathematical derivations. The outer ring forms a cluster topology, and adopts an on-demand and multi-node charging pattern mainly for high energy usage effectiveness. A space balancing principle and a time balancing principle are designed to determine the charging positions of the clusters. A gravitational search algorithm is designed to determine the charging sequence of the clusters. Several simulations verify the advantages of the proposed solution in terms of energy usage effectiveness, charging failure rate, and average task delay.
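As a rough illustration of how a space priority and a time priority could jointly order the inner-ring nodes for charging, here is a small Python sketch; the node attributes, normalization, and weights are invented for the example and are not the priorities derived in the paper.

```python
nodes = [
    # (id, distance_to_charger_m, residual_energy_j, consumption_rate_j_per_s)
    ("n1", 40.0, 500.0, 0.8),
    ("n2", 15.0, 200.0, 1.2),
    ("n3", 60.0, 900.0, 0.5),
    ("n4", 25.0, 150.0, 1.0),
]

def remaining_lifetime(node):
    # Time priority: nodes that will die sooner should be charged first.
    _, _, energy, rate = node
    return energy / rate

def travel_distance(node):
    # Space priority: nearer nodes are cheaper for the charger to reach.
    return node[1]

def charging_sequence(nodes, w_time=0.7, w_space=0.3):
    # Normalize both criteria to [0, 1]; a smaller combined score means charge earlier.
    t_max = max(remaining_lifetime(n) for n in nodes)
    s_max = max(travel_distance(n) for n in nodes)
    score = lambda n: (w_time * remaining_lifetime(n) / t_max
                       + w_space * travel_distance(n) / s_max)
    return [n[0] for n in sorted(nodes, key=score)]

print(charging_sequence(nodes))  # ['n2', 'n4', 'n1', 'n3'] with the numbers above
```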
Citations: 0
FPGA Implementation of Classical Dynamic Neural Networks for Smooth and Nonsmooth Optimization Problems
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-10-17. DOI: 10.1109/TSUSC.2023.3325268
Renfeng Xiao;Xing He;Tingwen Huang;Junzhi Yu
In this paper, a novel Field-Programmable-Gate-Array (FPGA) implementation framework based on Lagrange programming neural network (LPNN), projection neural network (PNN) and proximal projection neural network (PPNN) is proposed which can be used to solve smooth and nonsmooth optimization problems. First, Count Unit (CU) and Calculate Unit (CaU) are designed for smooth problems with equality constraints, and these units are used to simulate the iteration actions of neural network (NN) and form a feedback loop with other basic digital circuit operations. Then, the optimal solutions of optimization problems are mapped by the output waveforms. Second, the digital circuit structures of Path Select Unit (PSU), projection operator and proximal operator are further designed to process the box constraints and nonsmooth terms, respectively. Finally, the effectiveness and feasibility of the circuit are verified by three numerical examples on the Quartus II 13.0 sp1 platform with the Cyclone IV E series chip EP4CE10F17C8.
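For readers unfamiliar with the dynamics such circuits emulate, the following Python sketch shows the classical projection-neural-network iteration for a box-constrained quadratic program; the problem data and step size are illustrative, and the paper's FPGA units realize this kind of feedback loop in hardware rather than software.

```python
import numpy as np

# min 0.5 x^T Q x + c^T x  subject to  lo <= x <= hi
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
c = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)         # box constraints

def project(x):
    # Projection onto the box constraint set.
    return np.clip(x, lo, hi)

def pnn_solve(x0, alpha=0.1, steps=500):
    # Discrete-time projection neural network: x <- P_box(x - alpha * (Qx + c)).
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = project(x - alpha * (Q @ x + c))
    return x

print(pnn_solve([0.0, 0.0]))  # converges to the constrained optimum (~[0.25, 1.0])
```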
Citations: 0
DNN-SNN Co-Learning for Sustainable Symbol Detection in 5G Systems on Loihi Chip
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-10-13. DOI: 10.1109/TSUSC.2023.3324339
Shiya Liu;Yibin Liang;Yang Yi
Performing symbol detection for multiple-input and multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is challenging and resource-consuming. In this paper, we present a liquid state machine (LSM), a type of reservoir computing based on spiking neural networks (SNNs), to achieve energy-efficient and sustainable symbol detection on the Loihi chip for MIMO-OFDM systems. SNNs are more biologically plausible and energy-efficient than conventional deep neural networks (DNNs) but have lower performance in terms of accuracy. To enhance the accuracy of SNNs, we propose a knowledge distillation training algorithm called DNN-SNN co-learning, which employs a bi-directional learning path between a DNN and an SNN. Specifically, the knowledge from the output and intermediate layers of the DNN is transferred to the SNN, and we exploit a decoder to convert the spikes in the intermediate layers of an SNN into real numbers to enable communication between the DNN and the SNN. Through the bi-directional learning path, the SNN can mimic the behavior of the DNN by learning the knowledge from the DNN. Conversely, the DNN can better adapt itself to the SNN by using the knowledge from the SNN. We introduce a new loss function to enable knowledge distillation on regression tasks. Our LSM is implemented on Intel's Loihi neuromorphic chip, a specialized hardware platform for SNN models. The experimental results on symbol detection in MIMO-OFDM systems demonstrate that our LSM on the Loihi chip is more precise than conventional symbol detection algorithms. Also, the model consumes approximately 6 times less energy per sample than other quantized DNN-based models with comparable accuracy.
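A hedged sketch of what a distillation loss for a regression task can look like is shown below: the student (SNN) output is pulled toward both the ground truth and the teacher (DNN), and decoded intermediate-layer activity is matched against the teacher's features. The weights and exact terms are placeholders, not the paper's proposed loss function.

```python
import numpy as np

def co_learning_loss(student_out, teacher_out, target,
                     student_feat, teacher_feat, alpha=0.5, beta=0.2):
    task_loss = np.mean((student_out - target) ** 2)            # fit the labels
    distill_loss = np.mean((student_out - teacher_out) ** 2)    # mimic the teacher output
    feature_loss = np.mean((student_feat - teacher_feat) ** 2)  # match an intermediate layer
    return task_loss + alpha * distill_loss + beta * feature_loss

# Decoded spike activity from the SNN's intermediate layer is treated here as a
# real-valued feature vector comparable with the DNN's hidden activations.
rng = np.random.default_rng(0)
target = rng.normal(size=8)
student_out, teacher_out = target + 0.10, target + 0.05
student_feat, teacher_feat = rng.normal(size=16), rng.normal(size=16)
print(co_learning_loss(student_out, teacher_out, target, student_feat, teacher_feat))
```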
在多输入多输出正交频分复用(MIMO-OFDM)系统中进行符号检测既具有挑战性又耗费资源。在本文中,我们提出了一种基于尖峰神经网络(SNN)的存储计算--液态机(LSM),以在 Loihi 芯片上实现 MIMO-OFDM 系统的高能效和可持续符号检测。与传统的深度神经网络(DNN)相比,SNN更具生物合理性和能效,但在准确性方面性能较低。为了提高 SNN 的准确性,我们提出了一种称为 DNN-SNN 协同学习的知识提炼训练算法,它采用了 DNN 和 SNN 之间的双向学习路径。具体来说,我们将 DNN 输出层和中间层的知识转移到 SNN,并利用解码器将 SNN 中间层的尖峰转换为实数,从而实现 DNN 和 SNN 之间的通信。通过双向学习路径,SNN 可以通过学习 DNN 的知识来模仿 DNN 的行为。反之,DNN 可以通过使用 SNN 的知识更好地适应 SNN。我们引入了一个新的损失函数,以实现回归任务的知识提炼。我们的 LSM 是在英特尔的 Loihi 神经形态芯片上实现的,该芯片是 SNN 模型的专用硬件平台。MIMO-OFDM 系统中符号检测的实验结果表明,Loihi 芯片上的 LSM 比传统符号检测算法更加精确。此外,与精度相当的其他基于 DNN 的量化模型相比,该模型每次采样消耗的能量大约少 6 倍。
{"title":"DNN-SNN Co-Learning for Sustainable Symbol Detection in 5G Systems on Loihi Chip","authors":"Shiya Liu;Yibin Liang;Yang Yi","doi":"10.1109/TSUSC.2023.3324339","DOIUrl":"10.1109/TSUSC.2023.3324339","url":null,"abstract":"Performing symbol detection for multiple-input and multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is challenging and resource-consuming. In this paper, we present a liquid state machine (LSM), a type of reservoir computing based on spiking neural networks (SNNs), to achieve energy-efficient and sustainable symbol detection on the Loihi chip for MIMO-OFDM systems. SNNs are more biological-plausible and energy-efficient than conventional deep neural networks (DNN) but have lower performance in terms of accuracy. To enhance the accuracy of SNNs, we propose a knowledge distillation training algorithm called DNN-SNN co-learning, which employs a bi-directional learning path between a DNN and an SNN. Specifically, the knowledge from the output and intermediate layer of the DNN is transferred to the SNN, and we exploit a decoder to convert the spikes in the intermediate layers of an SNN into real numbers to enable communication between the DNN and the SNN. Through the bi-directional learning path, the SNN can mimic the behavior of the DNN by learning the knowledge from the DNN. Conversely, the DNN can better adapt itself to the SNN by using the knowledge from the SNN. We introduce a new loss function to enable knowledge distillation on regression tasks. Our LSM is implemented on Intel's Loihi neuromorphic chip, a specialized hardware platform for SNN models. The experimental results on symbol detection in MIMO-OFDM systems demonstrate that our LSM on the Loihi chip is more precise than conventional symbol detection algorithms. Also, the model consumes approximately 6 times less energy per sample than other quantized DNN-based models with comparable accuracy.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 2","pages":"170-181"},"PeriodicalIF":3.9,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136303279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Model-Free GPU Online Energy Optimization
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-13. DOI: 10.1109/TSUSC.2023.3314916
Farui Wang;Meng Hao;Weizhe Zhang;Zheng Wang
GPUs play a central and indispensable role as accelerators in modern high-performance computing (HPC) platforms, enabling a wide range of tasks to be performed efficiently. However, the use of GPUs also results in significant energy consumption and carbon dioxide (CO2) emissions. This article presents MF-GPOEO, a model-free GPU online energy efficiency optimization framework. MF-GPOEO leverages a synthetic performance index and a PID controller to dynamically determine the optimal clock frequency configuration for GPUs. It profiles GPU kernel activity information under different frequency configurations and then compares GPU kernel execution time and gap duration between kernels to derive the synthetic performance index. With the performance index and measured average power, MF-GPOEO can use the PID controller to try different frequency configurations and find the optimal frequency configuration under the guidance of user-defined objective functions. We evaluate the MF-GPOEO by running it with 74 applications on an NVIDIA RTX3080Ti GPU. MF-GPOEO delivers a mean energy saving of 26.2% with a slight average execution time increase of 3.4% compared with NVIDIA's default clock scheduling strategy.
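To make the control loop concrete, here is a toy PID sketch in the spirit of the description above: the controller nudges the GPU clock so that a measured slowdown tracks a user-defined budget. The slowdown model, gains, and clock limits are invented for the example; MF-GPOEO builds its feedback signal from profiled kernel activity and measured power instead.

```python
def measured_slowdown(freq_mhz, base_mhz=1800.0):
    # Hypothetical response: execution time grows roughly inversely with clock speed.
    return base_mhz / freq_mhz - 1.0

def pid_tune(freq_mhz=1800.0, allowed_slowdown=0.05,
             kp=1000.0, ki=100.0, kd=200.0, steps=100):
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = measured_slowdown(freq_mhz) - allowed_slowdown  # > 0: too slow, raise the clock
        integral += err
        derivative = err - prev_err
        freq_mhz += kp * err + ki * integral + kd * derivative
        freq_mhz = max(600.0, min(2000.0, freq_mhz))          # stay within a valid clock range
        prev_err = err
    return freq_mhz

# Moves the clock toward the frequency whose slowdown matches the 5% budget
# (about 1714 MHz in this toy model).
print(round(pid_tune(), 1))
```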
Citations: 0
Reliability Enhancement Strategies for Workflow Scheduling Under Energy Consumption Constraints in Clouds
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-12. DOI: 10.1109/TSUSC.2023.3314759
Longxin Zhang;Minghui Ai;Ke Liu;Jianguo Chen;Kenli Li
As the demand for Big Data analysis and artificial intelligence technology continues to surge, a significant amount of research has been conducted on cloud computing services. An effective workflow scheduling strategy stands as the pivotal factor in ensuring the quality of cloud services. Dynamic voltage and frequency scaling (DVFS) is an effective energy-saving technology that is extensively used in the development of workflow scheduling algorithms. However, DVFS reduces the processor's running frequency, which increases the possibility of soft errors in workflow execution, thereby lowering the workflow execution reliability. This study proposes an energy-aware reliability enhancement scheduling (EARES) method with a checkpoint mechanism to improve system reliability while meeting the workflow deadline and the energy consumption constraints. The proposed EARES algorithm consists of three phases, namely, workflow application initialization, deadline partitioning, and energy partitioning and virtual machine selection. Numerous experiments are conducted to assess the performance of the EARES algorithm using three real-world scientific workflows. Experimental results demonstrate that the EARES algorithm remarkably improves reliability in comparison with other state-of-the-art algorithms while meeting the deadline and satisfying the energy consumption requirement.
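As a small illustration of the deadline-partitioning phase, the sketch below spreads an overall workflow deadline across a chain of tasks in proportion to their expected execution times. EARES works on general DAG-structured workflows and also partitions the energy budget, so this is only the simplest possible case.

```python
def partition_deadline(task_times, deadline):
    # Give each task a sub-deadline proportional to its share of the total work.
    total = sum(task_times)
    sub_deadlines, elapsed = [], 0.0
    for t in task_times:
        elapsed += t / total * deadline
        sub_deadlines.append(round(elapsed, 2))
    return sub_deadlines

# A 100 s workflow deadline split across four chained tasks of 10, 30, 20 and 40 s of work.
print(partition_deadline([10, 30, 20, 40], 100))  # [10.0, 40.0, 60.0, 100.0]
```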
Citations: 0
Computation Offloading for Energy Efficiency Maximization of Sustainable Energy Supply Network in IIoT
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-11. DOI: 10.1109/TSUSC.2023.3313770
Zhao Tong;Jinhui Cai;Jing Mei;Kenli Li;Keqin Li
The efficiency of production and equipment maintenance costs in the Industrial Internet of Things (IIoT) are directly impacted by equipment lifetime, making it an important concern. Mobile edge computing (MEC) can enhance network performance, extend device lifetime, and effectively reduce carbon emissions by integrating energy harvesting (EH) technology. However, when the two are combined, the coupling effect of energy and the system's communication resource management pose a great challenge to the development of computational offloading strategies. This paper investigates the problem of maximizing the energy efficiency of computation offloading in a two-tier MEC network powered by wireless power transfer (WPT). First, the corresponding mathematical models are developed for local computing, edge server processing, communication, and EH. The proposed fractional problem is transformed into a stochastic optimization problem by Dinkelbach method. In addition, virtual power queues are introduced to eliminate energy coupling effects by maintaining the stability of the battery power queues. Next, the problem is then resolved through the utilization of both Lyapunov optimization and convex optimization method. Consequently, a wireless energy transmission-based algorithm for maximizing energy efficiency is proposed. Finally, energy efficiency, an important parameter of network performance, is used as an indicator. The excellent performance of the EEMA-WET algorithm is verified through extensive extension and comparison experiments.
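The Dinkelbach step mentioned above turns the fractional energy-efficiency objective into a sequence of parametric subproblems. The following Python sketch shows that iteration on a toy discrete example; the throughput and power functions are made up, and the paper solves its parametric problem with Lyapunov and convex optimization rather than brute force.

```python
# Dinkelbach iteration: max f(x)/g(x) becomes repeated solves of max f(x) - q*g(x).
def dinkelbach(f, g, candidates, tol=1e-6, max_iter=50):
    q = 0.0
    for _ in range(max_iter):
        # Solve the parametric subproblem for the current ratio estimate q.
        x_best = max(candidates, key=lambda x: f(x) - q * g(x))
        value = f(x_best) - q * g(x_best)
        if abs(value) < tol:          # F(q) = 0  =>  q is the optimal ratio
            return x_best, q
        q = f(x_best) / g(x_best)     # update the ratio estimate
    return x_best, q

# Toy example: maximize throughput/power over a handful of offloading choices.
choices = [0.2, 0.4, 0.6, 0.8, 1.0]          # fraction of the task offloaded
throughput = lambda x: 10 * x - 4 * x ** 2   # bits/s (made up)
power = lambda x: 1 + 2 * x                  # watts (made up)
print(dinkelbach(throughput, power, choices))  # picks x = 0.8, ratio ~ 2.09
```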
Citations: 0
Morph-GCNX: A Universal Architecture for High-Performance and Energy-Efficient Graph Convolutional Network Acceleration
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-11. DOI: 10.1109/TSUSC.2023.3313880
Ke Wang;Hao Zheng;Jiajun Li;Ahmed Louri
While current Graph Convolutional Network (GCN) accelerators have achieved notable success in a wide range of application domains, these GCN accelerators cannot support various intra- and inter-GCN dataflows or adapt to diverse GCN applications. In this paper, we propose Morph-GCNX, a flexible GCN accelerator architecture for high-performance and energy-efficient GCN execution. The proposed design consists of a flexible Processing Element (PE) array that can be partitioned at runtime and adapt to the computational needs of different layers within a GCN or multiple concurrent GCNs. The proposed Morph-GCNX also consists of a morphable interconnection design to support a wide range of GCN dataflows with various parallelization and data reuse strategies for GCN execution. We also propose a hardware-application co-exploration technique that explores the GCN and hardware design spaces to identify the best PE partition, workload allocation, dataflow, and interconnection configurations, with the goal of improving overall performance and energy. Simulation results show that the proposed Morph-GCNX architecture achieves 18.8×, 2.9×, 1.9×, 1.8×, and 2.5× better performance, reduces DRAM accesses by a factor of 10.8×, 3.7×, 2.2×, 2.5×, and 1.3×, and improves energy consumption by 13.2×, 5.6×, 2.1×, 2.5×, and 1.3×, as compared to prior designs including HyGCN, AWB-GCN, LW-GCN, GCoD, and GCNAX, respectively.
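A very rough stand-in for the hardware-application co-exploration step is sketched below: enumerate candidate PE-array partitions and dataflows, score each with a simple analytical cost model, and keep the cheapest configuration. The cost model, partition choices, and dataflow names are invented for illustration and do not reflect Morph-GCNX's actual estimator.

```python
from itertools import product

PE_TOTAL = 64
partitions = [(PE_TOTAL // k, k) for k in (1, 2, 4)]   # (PEs per GCN, concurrent GCNs)
dataflows = ["row-stationary", "output-stationary"]

def estimate_cost(pes_per_gcn, n_gcns, dataflow, work_per_gcn=1e6):
    # Invented analytical model: latency of one GCN on its partition, plus an
    # energy term that penalizes the dataflow with worse data reuse.
    latency = work_per_gcn / pes_per_gcn
    reuse_penalty = 1.0 if dataflow == "row-stationary" else 1.4
    energy = n_gcns * work_per_gcn * reuse_penalty
    return latency * energy                             # energy-delay product

best = min(product(partitions, dataflows),
           key=lambda cfg: estimate_cost(cfg[0][0], cfg[0][1], cfg[1]))
print(best)  # ((64, 1), 'row-stationary') under this toy model
```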
Citations: 0
A Safe Virtual Machine Scheduling Strategy for Energy Conservation and Privacy Protection of Server Clusters in Cloud Data Centers
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-01. DOI: 10.1109/TSUSC.2023.3303637
Xiaoyun Han;Chaoxu Mu;Jiebei Zhu;Hongjie Jia
With the increasing scale of cloud data centers (CDCs), the energy consumption of CDCs is sharply increasing. In this article, an efficient energy-saving strategy is proposed for CDCs. The greedy virtual machine (VM) deployment strategy is obtained by using the least number of servers, the heuristic VM migration strategy is obtained by using the improved double threshold algorithm, and the comprehensive VM scheduling strategy of servers is obtained by combining deployment and migration strategies. Furthermore, for the privacy security of VM scheduling, a safety-oriented energy-saving scheme based on information difference is proposed to ensure the dataset availability under privacy protection, compared with the $\varepsilon$-differential privacy algorithm and the $(\varepsilon, \delta)$-differential privacy algorithm. Simulation results show that the safe energy-saving strategy can significantly reduce the energy consumption in CDCs while guaranteeing the security and availability of the important datasets.
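To illustrate the double-threshold idea driving the migration strategy, the sketch below classifies hosts by CPU utilization: hosts above the upper threshold shed VMs, and hosts below the lower threshold are drained and powered down. The thresholds and utilizations are example values, not the improved algorithm's tuned parameters.

```python
def classify_hosts(cpu_util, lower=0.2, upper=0.8):
    # Double-threshold test: decide per host whether migration should be triggered.
    decisions = {}
    for host, u in cpu_util.items():
        if u > upper:
            decisions[host] = "overloaded: migrate some VMs away"
        elif u < lower:
            decisions[host] = "underloaded: migrate all VMs and power down"
        else:
            decisions[host] = "keep as is"
    return decisions

print(classify_hosts({"h1": 0.92, "h2": 0.55, "h3": 0.08}))
```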
Citations: 0
Sustainable Serverless Computing With Cold-Start Optimization and Automatic Workflow Resource Scheduling
IF 3.9. CAS Tier 3, Computer Science. Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2023-09-01. DOI: 10.1109/TSUSC.2023.3311197
Shanxing Pan;Hongyu Zhao;Zinuo Cai;Dongmei Li;Ruhui Ma;Haibing Guan
In recent years, serverless computing has garnered significant attention owing to its high scalability, pay-as-you-go billing model, and efficient resource management provided by cloud service providers. Optimal resource scheduling of serverless computing has become imperative to reduce energy consumption and enable sustainable computing. However, existing serverless platforms encounter two significant challenges: the cold-start problem of containers and the absence of an effective resource allocation strategy for serverless workflows. Existing pre-warm strategies are associated with high computational overhead, while current resource scheduling techniques inadequately account for the intricate structure of serverless workflows. To address these challenges, we present SSC, a pre-warming and automatic resource allocation framework designed explicitly for serverless workflows. We introduce an innovative gradient-based algorithm for pre-warming containers, significantly reducing cold start hit rates. Moreover, leveraging a critical path and priority queue-based algorithm, SSC enables efficient allocation of resources for serverless workflows. In our experimental evaluation, SSC reduces the cold start hit rate by nearly 50% and achieves substantial cost savings of approximately 30%.
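As an illustration of the critical-path component of the resource-allocation step, the sketch below finds the longest execution path through a small workflow DAG, which is the path whose functions would be prioritized. The function names and durations are invented for the example.

```python
from functools import lru_cache

durations = {"A": 2.0, "B": 5.0, "C": 1.0, "D": 3.0}
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

@lru_cache(maxsize=None)
def longest_from(node):
    # Length of the longest path starting at `node`, counting the node itself.
    tail = max((longest_from(s) for s in successors[node]), default=0.0)
    return durations[node] + tail

def critical_path(start):
    # Walk greedily along the successor with the longest remaining path.
    path, node = [start], start
    while successors[node]:
        node = max(successors[node], key=longest_from)
        path.append(node)
    return path, longest_from(start)

print(critical_path("A"))  # (['A', 'B', 'D'], 10.0)
```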
Citations: 0