
IEEE Transactions on Sustainable Computing: Latest Publications

Bringing Energy Efficiency Closer to Application Developers: An Extensible Software Analysis Framework
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-11-15 · DOI: 10.1109/TSUSC.2022.3222409
Charalampos Marantos;Lazaros Papadopoulos;Christos P. Lamprakos;Konstantinos Salapas;Dimitrios Soudris
Green, sustainable, and energy-aware computing have been gaining increasing attention in recent years. The growing complexity of Internet of Things (IoT) applications makes energy efficiency an important requirement, imposing new challenges on software developers. Software tools capable of providing energy consumption estimates and identifying optimization opportunities are critical during all phases of application development. This work proposes a novel framework that targets energy efficiency at the application development level. The framework is implemented as a single, user-friendly tool-flow providing a variety of useful features, such as estimating energy consumption without executing the application on the targeted IoT devices and estimating the potential gains from GPU acceleration on modern heterogeneous IoT architectures. The methodology makes several novel contributions, such as combining static analysis and dynamic instrumentation approaches in order to exploit the advantages of both. The framework is evaluated on widely used benchmarks, achieving high estimation accuracy (more than 90% for similar architectures and more than 72% for the potential use of the GPU). Its effectiveness is further demonstrated on two industrial use cases, achieving energy reductions of between 91% and 98%.
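The static-plus-dynamic estimation idea can be illustrated with a toy Python model: static analysis yields feature counts for the code, and a linear cost model, which would in practice be calibrated against dynamic measurements on an instrumented reference device, converts them into an energy estimate without running the application on the target. The feature names and coefficients below are invented for illustration and are not the paper's model.

```python
# Toy cross-device energy estimator: a linear model over static code
# features, calibrated once against dynamic measurements on a reference
# device. Feature names and coefficients are illustrative only.

# Static features extracted from the application (hypothetical counts).
static_features = {"int_ops": 1_200_000, "mem_loads": 300_000, "branches": 90_000}

# Per-feature energy costs in nJ, as if fitted against instrumented runs
# on a reference IoT board (made-up numbers).
calibrated_costs_nj = {"int_ops": 0.9, "mem_loads": 3.1, "branches": 1.4}

def estimate_energy_mj(features, costs_nj):
    """Estimate energy in millijoules from static counts and fitted costs."""
    total_nj = sum(features[k] * costs_nj[k] for k in features)
    return total_nj * 1e-6  # nJ -> mJ

print(f"estimated energy: {estimate_energy_mj(static_features, calibrated_costs_nj):.2f} mJ")
```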
Citations: 0
Application Specific Approximate Behavioral Processor
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-11-14 · DOI: 10.1109/TSUSC.2022.3222117
Qilin Si;Prattay Chowdhury;Rohit Sreekumar;Benjamin Carrion Schafer
Many applications require simple controllers that continuously run the same application. These controllers are often found in battery-operated embedded systems that must be ultra-low power (ULP) and are very price sensitive; examples include IoT devices of various kinds and medical devices. Currently, these systems rely on off-the-shelf general-purpose microprocessors. One of the problems of using these processors is that not all of their resources are needed for a specific application. Furthermore, because of the regularity of the workloads running on these systems, there is a large opportunity to optimize the processor by pruning unused resources to achieve lower area (cost) and power. Moreover, these processors can be specified at the behavioral level, using High-Level Synthesis (HLS) to generate an efficient Register Transfer Level (RTL) description. This opens a window to additional optimizations, as the processor implementation is fully re-optimized during the HLS process. Also, many applications running on these embedded systems, including image processing and digital signal processing (DSP) applications, tolerate imprecise outputs, which opens the door to further optimizations in the context of approximate computing. To address these issues, this work presents a methodology to automatically customize a behavioral RISC processor for a given workload such that its area and power are significantly reduced compared to the original, general-purpose processor. First, it generates a bespoke processor that produces exactly the same output as the original general-purpose one; it then approximates that processor, allowing a certain level of error at the output. Compared to previous work that customizes a given processor at the gate netlist only, our proposed method shows significant benefits. In particular, this work shows that raising the level of abstraction reduces the area and power by 78.3% and 70.1% on average for the exact solution, and further reduces the area by an additional 10.0% and 16.5% for the approximate versions tolerating a maximum of 10% and 20% output errors, respectively.
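A minimal sketch of the workload-driven pruning step: profile which instructions a fixed application trace actually executes and keep only the functional units they touch. The toy ISA and instruction-to-unit mapping are assumptions for illustration, not the paper's processor description.

```python
# Sketch of workload-driven pruning: keep only the functional units that
# the application's instruction trace actually exercises. The ISA and the
# instruction-to-unit mapping are hypothetical.

UNIT_OF = {"add": "alu", "sub": "alu", "mul": "multiplier",
           "div": "divider", "ld": "lsu", "st": "lsu", "beq": "branch"}

def used_units(trace):
    """Return the set of functional units touched by an instruction trace."""
    return {UNIT_OF[op] for op in trace if op in UNIT_OF}

trace = ["ld", "add", "add", "beq", "st"]          # a fixed, regular workload
keep = used_units(trace)
prune = set(UNIT_OF.values()) - keep
print("keep:", sorted(keep))    # units the bespoke core must retain
print("prune:", sorted(prune))  # e.g., multiplier/divider can be removed
```

Because the workload never changes on such controllers, anything in the prune set can be removed from the behavioral description before HLS re-optimizes the remaining design.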
Citations: 0
GaSaver: A Static Analysis Tool for Saving Gas
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-11-11 · DOI: 10.1109/TSUSC.2022.3221444
Ziyi Zhao;Jiliang Li;Zhou Su;Yuyi Wang
Smart contracts are programs running on Ethereum whose deployment and use require gas. Gas measures the cost of performing specific operations and serves as an index designed to quantify computing power consumption. Existing unoptimized smart contracts make contract developers and users spend extra gas. To save gas and optimize smart contracts, this paper proposes a new tool named GaSaver for automatically detecting gas-expensive patterns based on Solidity source code. Specifically, we first identify 12 gas-expensive patterns in smart contracts and classify them into three categories: storage-related, judgment-related, and loop-related. Then, we deploy these gas-expensive patterns and group them into three levels according to their degree of gas waste. By conducting extensive experiments on real data sets, we find that 89.68% of 1172 smart contracts suffer from gas-expensive patterns, 94.27% of 1100 new smart contracts are gas-expensive, and 80.56% of 72 widely used smart contracts are affected. Finally, the experimental results show that the proposed GaSaver can effectively optimize smart contracts. Besides, the proportion of gas-expensive cases in widely used smart contracts is lower than in newly released smart contracts.
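As a rough illustration of what detecting a storage-related gas-expensive pattern involves, the sketch below scans Solidity source for a state variable accessed inside a loop body (which causes repeated SLOAD/SSTORE operations). The regexes and the example contract are hypothetical; GaSaver's actual analysis is not reproduced here.

```python
import re

# Toy detector for one storage-related gas-expensive pattern: a state
# variable referenced inside a for-loop body. This is a crude textual
# approximation, not GaSaver's real analysis.

SOLIDITY_SRC = """
uint256 public total;
function sum(uint256[] memory xs) public {
    for (uint i = 0; i < xs.length; i++) {
        total += xs[i];   // storage write in every iteration
    }
}
"""

state_vars = re.findall(r"uint\d*\s+(?:public\s+)?(\w+)\s*;", SOLIDITY_SRC)
loop_bodies = re.findall(r"for\s*\([^)]*\)\s*\{([^}]*)\}", SOLIDITY_SRC)

for var in state_vars:
    for body in loop_bodies:
        if re.search(rf"\b{var}\b", body):
            print(f"gas-expensive pattern: state variable '{var}' accessed in loop")
```

The usual remedy for this particular pattern is to accumulate into a local (memory) variable inside the loop and write the storage variable once afterwards.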
Citations: 1
Dynamic State Estimation for Synchronous Generator With Communication Constraints: An Improved Regularized Particle Filter Approach
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-11-10 · DOI: 10.1109/TSUSC.2022.3221090
Xingzhen Bai;Feiyu Qin;Leijiao Ge;Lin Zeng;Xinlei Zheng
Accurate acquisition of the real-time electromechanical dynamic states of synchronous generators plays an essential role in power systems. Phasor measurement units (PMUs) are widely used in the data acquisition of synchronous generator operation parameters and can capture the dynamic responses of generators. However, distortion of the measurement results is inevitable for various reasons, such as device failure and operating-environment interference. Meanwhile, it is hard to transmit huge volumes of data to the information center due to limited communication bandwidth. To tackle these challenges, this article proposes a dynamic state estimation method for synchronous generators with an event-triggered scheme. The proposed method first establishes a non-linear model to describe the dynamics of generators. Then, a measure-based event-triggering scheme is adopted to schedule the data transmission from the sensor to the estimator, thus reducing communication pressure and enhancing resource utilization. Finally, an improved regularized particle filter (IRPF) algorithm is designed to guarantee the estimation performance. To this end, a genetic algorithm is used to optimize the particles sampled by the regularized particle filter, which solves the particle exhaustion problem. The CEPRI7 system is used to verify the performance of the proposed method.
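The measure-based event-triggering idea can be sketched in isolation: the sensor transmits a sample only when it deviates from the last transmitted value by more than a threshold, and the estimator propagates its model prediction between events. The signal, noise level, and threshold below are all invented for illustration.

```python
import math
import random

# Sketch of a measurement-based event trigger: send only when the new
# sample differs from the last transmitted one by more than delta.
# The rotor-angle-like signal and the threshold are made up.

random.seed(0)
delta = 0.05          # triggering threshold
last_sent = None
sent = 0

for k in range(200):
    y = math.sin(0.05 * k) + random.gauss(0, 0.01)   # noisy measurement
    if last_sent is None or abs(y - last_sent) > delta:
        last_sent = y                                 # transmit to estimator
        sent += 1
    # otherwise the estimator runs on its model prediction only

print(f"transmitted {sent}/200 samples ({100 * sent / 200:.0f}% of full-rate traffic)")
```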
Citations: 0
A Novel Key Agreement Protocol Applying Latin Square for Cloud Data Sharing
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-11-10 · DOI: 10.1109/TSUSC.2022.3221125
Jian Shen;Tao Zhang;Yi Jiang;Tianqi Zhou;Tiantian Miao
In the cloud computing context, group data sharing, which is highly convenient for people who intend to share data with multiple members of a group, has attracted extensive research attention. However, the question of how data security might be ensured in this scenario, particularly as regards the security of group session keys, remains open. To support secure group data sharing in cloud computing, it is essential to design a key agreement protocol characterized by both high flexibility and efficiency. Accordingly, this paper proposes a key agreement protocol based on the Latin square to reinforce the security of group data sharing. This protocol supports an arbitrary number of members participating in a group, with each making an equal contribution to generating a common session key. Compared with previous works, both the communication and the computational complexity of this protocol are significantly reduced. In addition, the services of authentication, key confirmation, and fault tolerance are provided, enabling the protocol to resist different types of attacks. The results of both theoretical and experimental analysis indicate that the proposed protocol can efficiently support secure cloud data sharing for large-scale groups.
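For intuition, a Latin square of any order can be constructed from cyclic shifts so that every symbol appears exactly once per row and column; this is the kind of combinatorial object the protocol builds on. The final "combine contributions" step below is a deliberately insecure toy, not the paper's key agreement scheme.

```python
# Build an n x n Latin square by cyclic shifts: row i is the sequence
# (i, i+1, ..., i+n-1) mod n, so every symbol appears once per row and
# once per column. How the protocol derives keys from such squares is
# not shown; the "combine contributions" step is a made-up illustration.

def cyclic_latin_square(n):
    return [[(i + j) % n for j in range(n)] for i in range(n)]

n = 5
square = cyclic_latin_square(n)
for row in square:
    print(row)

# Toy equal-contribution step: each member contributes one value, and the
# group maps the combined index into the square (NOT a secure scheme).
contributions = [2, 4, 1]                 # one value per group member
idx = sum(contributions) % n
print("shared row:", square[idx])
```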
Citations: 0
CPU Frequency Scaling Optimization in Sustainable Edge Computing
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-10-28 · DOI: 10.1109/TSUSC.2022.3217970
Yu Luo;Lina Pu;Chun-Hung Liu
Sustainable edge computing (SEC) is a promising technology that can reduce energy consumption and computing latency for the mobile Internet of Things (IoT). By collecting renewable energy such as solar or wind energy from the environment, a sustainable cloudlet outside the electric grid can provide powerful computing capabilities for resource-constrained mobile IoT devices. In the real world, the density of sustainable energy can vary significantly over time. Therefore, the SEC cloudlet needs to dynamically adjust its clock frequency to balance energy consumption and computing latency. In this paper, we consider the limited energy storage of the cloudlet and the dynamic intensity of renewable energy, and then develop offline optimal CPU frequency scaling policies that (a) maximize the computing power of the cloudlet within a certain period of time, and (b) minimize the execution time of tasks offloaded to the cloudlet. An optimal tightest-string policy is proposed to solve the optimization problem. In addition, a dynamic programming (DP) based suboptimal solution is introduced to simplify the practical implementation. How to design an online CPU frequency management strategy is also briefly discussed.
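A minimal dynamic-programming sketch of the frequency-scaling problem, under assumed constants: in each slot the cloudlet harvests energy (capped by its battery), picks a frequency, pays a cubic power cost, and earns cycles proportional to the frequency. All numbers are invented; this conveys the flavor of a DP-based suboptimal solution, not the paper's tightest-string policy.

```python
# Minimal DP sketch: pick a CPU frequency per slot to maximize total
# cycles, subject to per-slot harvested energy, a battery cap, and a
# cubic frequency-power model. All constants are invented.

FREQS = [0.0, 0.5, 1.0, 1.5]        # frequency options in GHz (0 = sleep)
HARVEST = [2.0, 0.5, 3.0, 1.0]      # energy arriving in each slot (J)
CAP = 4.0                           # battery capacity (J)

def energy_cost(f):
    return f ** 3                   # simplified dynamic-power model

def best_cycles(slot, battery, memo={}):
    """Max total (normalized) cycles achievable from this state onward."""
    if slot == len(HARVEST):
        return 0.0
    key = (slot, round(battery, 3))
    if key not in memo:
        b = min(CAP, battery + HARVEST[slot])     # harvest first, then cap
        memo[key] = max(f + best_cycles(slot + 1, b - energy_cost(f))
                        for f in FREQS if energy_cost(f) <= b)
    return memo[key]

print(f"max normalized cycles over the horizon: {best_cycles(0, 0.0):.2f}")
```

The cubic cost makes the tradeoff visible: running fast in a slot with little stored energy can starve later slots, which is why the schedule must be optimized over the whole horizon rather than greedily.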
Citations: 1
Design and Analysis of Heuristic Algorithms for Energy-Constrained Task Scheduling With Device-Edge-Cloud Fusion
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-10-25 · DOI: 10.1109/TSUSC.2022.3217014
Keqin Li
Mobile edge computing with device-edge-cloud fusion provides a new type of heterogeneous computing environment. We consider task scheduling with device-edge-cloud fusion (without energy concerns) and energy-constrained task scheduling with device-edge-cloud fusion as combinatorial optimization problems. The main contributions of the paper are summarized as follows. We design three heuristic algorithms for task scheduling with device-edge-cloud fusion and prove an asymptotic performance bound. We design one heuristic algorithm for energy-constrained task scheduling with device-edge-cloud fusion, which solves the two subproblems of task scheduling and power allocation in an interleaved way. We derive lower bounds on the optimal solutions for both problems, so that the performance of our heuristic algorithms can be compared with that of an optimal algorithm. We experimentally evaluate our heuristic algorithms and find that their performance is very close to that of optimal algorithms. To the best of our knowledge, this is the first paper that studies task scheduling and energy-constrained task scheduling with device-edge-cloud fusion as combinatorial optimization problems and conducts an analytical performance evaluation.
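One of the simplest heuristics in this family is a greedy list scheduler: assign each task to whichever tier (device, edge, or cloud) finishes it earliest, tracking each resource's ready time and a fixed transfer delay. The speeds, delays, and task sizes below are invented, and this sketch does not reproduce the paper's three algorithms.

```python
# Greedy list-scheduling sketch for device-edge-cloud placement: each
# task goes to the resource with the earliest finish time, accounting
# for a fixed transfer delay. Speeds, delays, and task sizes are
# illustrative assumptions.

RESOURCES = {            # name: (relative speed, fixed transfer delay)
    "device": (1.0, 0.0),
    "edge":   (4.0, 0.5),
    "cloud":  (16.0, 2.0),
}

tasks = [8.0, 3.0, 12.0, 5.0, 1.0]          # work units per task
ready = {name: 0.0 for name in RESOURCES}   # next free time per resource

for i, work in enumerate(tasks):
    def finish(name):
        # finish time = max(resource free, data arrival) + compute time
        speed, delay = RESOURCES[name]
        return max(ready[name], delay) + work / speed
    best = min(RESOURCES, key=finish)
    ready[best] = finish(best)
    print(f"task {i} -> {best} (done at {ready[best]:.2f})")

print(f"makespan: {max(ready.values()):.2f}")
```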
Citations: 2
A Stochastic Approach to Determine the Optimal Number of Servers for Reliable and Energy Efficient Operation of Data Centers
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-10-21 · DOI: 10.1109/TSUSC.2022.3216350
Kazi Main Uddin Ahmed;Math H. J. Bollen;Manuel Alvarez
The increasing demand for data centers' computational capacity in recent years has introduced new operational challenges, among them maintaining service level agreements (SLAs) and quality of service (QoS) while limiting energy consumption. This paper presents a stochastic operational risk assessment approach that estimates the required number of spare servers in a data center, considering the risk of servers failing in operation, since servers define the computational capability of a data center. A reliability index called the "risk of computational resource commitment (RCRC)" is introduced; it quantifies the probability of having insufficient spare servers due to failures during the operational lead time, and its complement shows the ability of the resources to maintain the SLA of a data center. The failure rates of the servers are obtained using a Monte Carlo simulation with the failure data published by Google in 2019. The analysis shows that the RCRC decreases as the number of spare servers increases, while the number of spares also stresses the energy efficiency of the data center. The RCRC index could be used in data center operation to avoid overprovisioning and to limit the number of spare servers, while striking a suitable balance between the QoS and energy consumption of the data center.
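The RCRC index can be reproduced in miniature with a Monte Carlo loop: draw server failures over the lead time from independent per-server failure probabilities and count the fraction of runs in which failures exceed the available spares. The fleet size and failure probability below are invented, not the Google failure data used in the paper.

```python
import random

# Monte Carlo sketch of the RCRC index: the probability that server
# failures during the operational lead time exceed the number of spares.
# Fleet size and per-server failure probability are invented.

random.seed(42)
N_SERVERS = 500
P_FAIL = 0.01            # per-server failure probability over the lead time
TRIALS = 10_000

def rcrc(spares):
    short = 0
    for _ in range(TRIALS):
        failures = sum(random.random() < P_FAIL for _ in range(N_SERVERS))
        if failures > spares:
            short += 1
    return short / TRIALS

for s in (3, 5, 8, 12):
    print(f"spares={s:2d}  RCRC={rcrc(s):.4f}")
```

As the output shows, RCRC falls quickly with each added spare, so an operator can pick the smallest spare pool whose RCRC meets the SLA target instead of overprovisioning.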
Citations: 0
Energy-Efficient Computation Offloading for Static and Dynamic Applications in Hybrid Mobile Edge Cloud System
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-10-21 · DOI: 10.1109/TSUSC.2022.3216461
Jing Bi;Kaiyi Zhang;Haitao Yuan;Jia Zhang
As a promising paradigm, mobile edge computing (MEC) provides cloud resources at the network edge to offer low-latency services to mobile devices (MDs). MEC addresses the limited resources and energy of MDs by deploying edge servers, which are often located in small base stations. It remains a big challenge, however, to dynamically connect resource-limited MDs to nearby edge servers while reducing the total energy consumed by MDs, small base stations, and a cloud data center (CDC) in a hybrid system. To tackle this challenge, this work provides an intelligent computation offloading method for both static and dynamic applications among entities in such a hybrid system. The minimization of total energy consumption is first formulated as a typical mixed integer non-linear program. An improved meta-heuristic optimization algorithm, named Particle swarm optimization based on Genetic Learning (PGL), is tailored to solve the problem. PGL synergistically takes advantage of both the fast convergence of particle swarm optimization and the global search ability of the genetic algorithm. It jointly optimizes the task offloading of heterogeneous applications, the bandwidth allocation of wireless channels, the association of MDs with small base stations and/or a cloud data center, and the computing resource allocation of MDs. Numerical results with real-life system configurations prove that PGL outperforms several state-of-the-art peers in terms of the total energy consumption of the hybrid system.
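The hybrid's core loop can be sketched on a toy objective: standard PSO velocity and position updates, plus a genetic-style random mutation injected into particle positions to preserve diversity. The sphere objective and all hyperparameters are arbitrary assumptions; this is not the paper's PGL algorithm.

```python
import random

# Minimal PSO-with-mutation sketch on a toy objective (sphere function).
# It illustrates the hybrid idea only: PSO updates plus a genetic-style
# mutation for diversity. Hyperparameters are arbitrary.

random.seed(1)
DIM, SWARM, ITERS = 4, 20, 200
W, C1, C2, MUT = 0.7, 1.5, 1.5, 0.05

def f(x):                      # objective to minimize
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)[:]

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
            if random.random() < MUT:             # genetic-style mutation
                pos[i][d] = random.uniform(-5, 5)
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest + [gbest], key=f)[:]

print(f"best objective found: {f(gbest):.6f}")
```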
Citations: 3
Slack-Aware Packet Approximation for Energy-Efficient Network-on-Chips
IF 3.9 · CAS Zone 3 (Computer Science) · Q1 (Mathematics) · Pub Date: 2022-10-19 · DOI: 10.1109/TSUSC.2022.3213469
Yuechen Chen;Ahmed Louri;Shanshan Liu;Fabrizio Lombardi
Network-on-Chips (NoCs) are the standard on-chip communication fabrics for connecting cores, caches, and memory controllers in multi/many-core systems. With the increase in communication load introduced by emerging parallel computing applications, on-chip communication is becoming more costly than computation in terms of energy consumption. This paper contributes to existing research on approximate communication by proposing a slack-aware packet approximation technique to reduce the energy consumed by NoCs for sustainable parallel computation. The proposed technique lowers both the execution time and NoC power consumption by reducing the packet size based on slack. The slack is the number of cycles by which a packet can be delayed in the network with no effect on execution time. Thus, low-slack packets are considered critical to system performance, and prioritizing these packets during transmission significantly reduces execution time. The proposed technique includes a slack-aware control policy to identify low-slack packets and accelerates these packets using two packet approximation mechanisms, namely an in-network approximation (INAP) and a network interface approximation (NIAP). The INAP mechanism prioritizes low-slack packets during the arbitration phase of the router by approximating packets with high slack. The NIAP mechanism reduces the latency of network links and switch traversals by truncating the data of low-slack packets. An approximate network interface and router are implemented to support the proposed technique with lightweight packet approximation hardware for lower power consumption and execution time. Cycle-accurate simulations using the AxBench and PARSEC benchmark suites show that the proposed approximate communication technique achieves reductions of up to 24% in execution time and 38% in energy consumption, with 1.1% less accuracy loss on average compared to existing approximate communication techniques.
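An NIAP-style truncation decision can be sketched as follows: if a packet carries approximable data and its slack is below a threshold, drop the low-order payload bits so fewer bits (and hence fewer flits) traverse the link. The threshold, slack values, and 32-bit payload layout are invented for illustration.

```python
# Sketch of slack-aware payload truncation at the network interface:
# low-slack (latency-critical) packets are approximated by dropping
# low-order bits so fewer flits are transmitted. Thresholds, slack
# values, and the 32-bit payload layout are illustrative assumptions.

SLACK_THRESHOLD = 4      # cycles; below this a packet counts as critical
DROP_BITS = 16           # low-order bits removed from approximable data

def approximate(value32, slack, approximable):
    """Return (payload, bits) after the slack-aware truncation decision."""
    if approximable and slack < SLACK_THRESHOLD:
        return value32 >> DROP_BITS, 32 - DROP_BITS   # truncated payload
    return value32, 32                                # exact payload

packets = [(0x3F8F5C29, 2, True),    # critical + approximable -> truncate
           (0x3F8F5C29, 9, True),    # plenty of slack -> send exact
           (0xDEADBEEF, 1, False)]   # critical but exact-only (e.g., address)

for value, slack, ok in packets:
    payload, bits = approximate(value, slack, ok)
    print(f"slack={slack} -> {bits}-bit payload 0x{payload:X}")
```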
Citations: 0