
Latest publications in IEEE Transactions on Sustainable Computing

SCROOGEVM: Boosting Cloud Resource Utilization With Dynamic Oversubscription
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-23 | DOI: 10.1109/TSUSC.2024.3369333
Pierre Jacquet;Thomas Ledoux;Romain Rouvoy
Despite continuous improvements, cloud physical resources remain underused, hence severely impacting the efficiency of these infrastructures at large. To overcome this inefficiency, Infrastructure-as-a-Service (IaaS) providers usually compensate for oversized Virtual Machines (VMs) by offering more virtual resources than are physically available on a host. However, this technique—known as oversubscription—may hinder performance when a statically-defined oversubscription ratio results in resource contention of hosted VMs. Therefore, instead of setting a static and cluster-wide ratio, this article studies how a greedy increase of the oversubscription ratio per Physical Machine (PM) and resource type can preserve performance goals. Keeping performance unchanged allows our contribution to be more realistically adopted by production-scale IaaS infrastructures. This contribution, named ScroogeVM, leverages the detection of PM stability to carefully increase the associated oversubscription ratios. Based on metrics shared by public cloud providers, we investigate the impact of resource oversubscription on performance degradation. Subsequently, we conduct a comparative analysis of ScroogeVM with state-of-the-art oversubscription computations. The results demonstrate that our approach outperforms existing methods by leveraging the presence of long-lasting VMs, while avoiding live migration penalties and performance impacts for stakeholders.
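The greedy per-PM adjustment described here can be pictured as a small control loop: raise a machine's oversubscription ratio only while its load looks stable, and back off when contention threatens. The sketch below is a minimal Python illustration of that idea; the stability test, thresholds, and step size are hypothetical placeholders, not the criteria used by ScroogeVM.

```python
from dataclasses import dataclass

@dataclass
class PMState:
    """Observed state of one physical machine (hypothetical metrics)."""
    cpu_ratio: float      # current vCPU oversubscription ratio
    usage_history: list   # recent CPU utilisation samples in [0, 1]

def is_stable(pm: PMState, window: int = 12, max_std: float = 0.05) -> bool:
    """Treat a PM as stable when its recent utilisation varies little."""
    samples = pm.usage_history[-window:]
    if len(samples) < window:
        return False
    mean = sum(samples) / len(samples)
    std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return std <= max_std

def adjust_ratio(pm: PMState, step: float = 0.1, cap: float = 3.0,
                 high_usage: float = 0.85) -> float:
    """Greedily raise the per-PM ratio while stable; back off under pressure."""
    recent = pm.usage_history[-1] if pm.usage_history else 0.0
    if recent >= high_usage:
        pm.cpu_ratio = max(1.0, pm.cpu_ratio - step)   # protect hosted VMs
    elif is_stable(pm):
        pm.cpu_ratio = min(cap, pm.cpu_ratio + step)   # reclaim idle capacity
    return pm.cpu_ratio

pm = PMState(cpu_ratio=1.5, usage_history=[0.42] * 12)
print(adjust_ratio(pm))   # stable and lightly loaded, so the ratio rises
```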
Citations: 0
OceanCrowd: Vessel Trajectory Data-Based Participant Selection for Mobile Crowd Sensing in Ocean Observation
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-23 | DOI: 10.1109/TSUSC.2024.3369092
Shuai Guo;Menglei Xia;Huanqun Xue;Shuang Wang;Chao Liu
With the in-depth study of the internal process mechanisms of the global ocean by oceanographers, traditional ocean observation methods have been unable to meet the new observation requirements. In order to achieve a low-cost ocean observation mechanism with high spatio-temporal resolution, this paper introduces mobile crowd sensing technology into the field of ocean observation. First, a Transformer-based vessel trajectory prediction algorithm is proposed, which can monitor the location and movement trajectory of vessels in real time. Second, the participant selection algorithm in mobile crowd sensing is studied, and based on the trajectory prediction algorithm, a dynamic participant selection algorithm for ocean mobile crowd sensing is proposed by combining it with the discrete particle swarm optimization (DPSO) algorithm. Third, a coverage estimation algorithm is designed to estimate the coverage of the selection scheme. Finally, the spatio-temporal resolution of the vessel trajectories is analyzed through experiments, which verifies the effectiveness of the algorithm and comprehensively confirms the feasibility of mobile crowd sensing in the field of ocean observation.
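The participant-selection step couples predicted vessel trajectories with a coverage estimate. As a rough illustration, the sketch below greedily picks vessels that add the most spatio-temporal grid coverage; it is a simplified stand-in for the paper's DPSO-based selection, and the trajectory sets and budget are hypothetical.

```python
def coverage(cells_covered_by, selected):
    """Fraction of grid-time cells touched by at least one selected vessel
    (a simplified stand-in for the paper's coverage-estimation algorithm)."""
    covered = set()
    for v in selected:
        covered |= cells_covered_by[v]
    total = len(set().union(*cells_covered_by.values()))
    return len(covered) / total if total else 0.0

def select_participants(cells_covered_by, budget):
    """Greedy selection maximising marginal coverage under a budget."""
    selected, remaining = [], set(cells_covered_by)
    while remaining and len(selected) < budget:
        best = max(remaining,
                   key=lambda v: coverage(cells_covered_by, selected + [v]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical predicted trajectories mapped onto (grid cell, time slot) pairs.
trajs = {
    "vessel_a": {(0, 0), (0, 1), (1, 1)},
    "vessel_b": {(1, 1), (2, 1), (2, 2)},
    "vessel_c": {(0, 0), (2, 2)},
}
print(select_participants(trajs, budget=2))
```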
Citations: 0
Blockchain for Energy Credits and Certificates: A Comprehensive Review
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-16 | DOI: 10.1109/TSUSC.2024.3366502
Syed Muhammad Danish;Kaiwen Zhang;Fatima Amara;Juan Carlos Oviedo Cepeda;Luis Fernando Rueda Vasquez;Tom Marynowski
Climate change is a major issue that has disastrous impacts on the environment through different causes such as greenhouse gas (GHG) emissions. Many energy utilities around the world intend to reduce GHG emissions by promoting different systems including carbon emission trading (CET), renewable energy certificates (RECs), and tradable white certificates (TWCs). However, these systems are centralized, highly regulated, and operationally expensive, and do not meet transparency, trust, and security requirements. Accordingly, GHG emission reduction schemes are gradually moving towards blockchain-based solutions due to their underpinning characteristics including decentralization, transparency, anonymity, and trust (independent from third parties). This paper performs a comprehensive investigation into blockchain technology deployed for GHG emission reduction plans. It explores existing blockchain solutions along with their associated challenges to effectively uncover their potential. As a result, this study suggests possible lines of research for future enhancements of blockchain systems, particularly their incorporation in GHG emission reduction.
Citations: 0
DRLCAP: Runtime GPU Frequency Capping With Deep Reinforcement Learning
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-06 | DOI: 10.1109/TSUSC.2024.3362697
Yiming Wang;Meng Hao;Hui He;Weizhe Zhang;Qiuyuan Tang;Xiaoyang Sun;Zheng Wang
Power and energy consumption are the limiting factors of modern computing systems. As the GPU becomes a mainstream computing device, power management for GPUs becomes increasingly important. Current works focus on GPU kernel-level power management, with challenges in portability due to architecture-specific considerations. We present DRLCap, a general runtime power management framework intended to support power management across various GPU architectures. It periodically monitors system-level information to dynamically detect program phase changes and model the workload and GPU system behavior. This freedom from kernel-specific constraints enhances adaptability and responsiveness. The framework leverages dynamic GPU frequency capping, which is the most widely used power knob, to control the power consumption. DRLCap employs deep reinforcement learning (DRL) to adapt to changing program phases by automatically adjusting its power policy through online learning, aiming to reduce the GPU power consumption without significantly compromising the application performance. We evaluate DRLCap on three NVIDIA and one AMD GPU architectures. Experimental results show that DRLCap improves prior GPU power optimization strategies by a large margin. On average, it reduces the GPU energy consumption by 22% with less than 3% performance slowdown on NVIDIA GPUs. This translates to a 20% improvement in the energy efficiency measured by the energy-delay product (EDP) over the NVIDIA default GPU power management strategy. For the AMD GPU architecture, DRLCap saves energy consumption by 10%, on average, with a 4% performance loss, and improves energy efficiency by 8%.
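At its core the abstract describes a feedback loop: observe system-level metrics, pick a frequency cap, and learn from a reward that penalizes power while protecting performance. Below is a minimal, hedged sketch of such a loop; the metric names, action set, and reward weights are hypothetical, the learner is left as a pluggable `policy` object rather than the paper's DRL agent, and the cap is only logged instead of being applied to real hardware.

```python
import random

FREQ_CAPS_MHZ = [900, 1100, 1300, 1500, 1700]   # hypothetical discrete action set

def observe_metrics():
    """Stand-in for system-level telemetry (utilisation, power, throughput);
    a real deployment would read hardware counters instead of random values."""
    return {"util": random.random(),
            "power_w": 150 + 100 * random.random(),
            "throughput": random.random()}

def reward(metrics, perf_baseline, alpha=1.0, beta=5.0):
    """Trade off energy against performance loss, as the abstract describes."""
    perf_loss = max(0.0, perf_baseline - metrics["throughput"])
    return -alpha * metrics["power_w"] - beta * perf_loss

def apply_frequency_cap(mhz):
    """Placeholder: only logs the decision instead of touching the GPU driver."""
    print(f"capping GPU core clock at {mhz} MHz")

def control_loop(policy, steps=100, perf_baseline=0.9):
    """Generic phase-adaptive capping loop; `policy` is any learner exposing
    act(state) -> action index and update(state, action, r, next_state)."""
    state = observe_metrics()
    for _ in range(steps):
        action = policy.act(state)             # index into FREQ_CAPS_MHZ
        apply_frequency_cap(FREQ_CAPS_MHZ[action])
        next_state = observe_metrics()
        policy.update(state, action, reward(next_state, perf_baseline), next_state)
        state = next_state
```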
Citations: 0
Dynamic Outsourced Data Audit Scheme for Merkle Hash Grid-Based Fog Storage With Privacy-Preserving
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-05 | DOI: 10.1109/TSUSC.2024.3362074
Ke Gu;XingQiang Wang;Xiong Li
The security of fog computing has drawn increasing research attention as the paradigm develops, since malicious attacks pose a growing threat to fog computing-based distributed data storage. Moreover, the rapid increase in the number of terminal devices has raised the importance of fog computing-based distributed data storage. In response to this demand, it is essential to establish a secure and privacy-preserving distributed data auditing method that enables security protection of stored data and effective control over the identities of auditors. In this paper, we propose a dynamic outsourced data audit scheme for Merkle hash grid-based fog storage with privacy preservation, where fog servers are used to undertake partial outsourced computation and data storage. Our scheme provides privacy preservation for outsourced data by blinding the original stored data, and supports data owners in defining their auditing access policies through the linear secret-sharing scheme to control the identities of auditors. Further, the Merkle hash grid construction is used to improve the efficiency of dynamic data operations. Also, a server locating approach is proposed to enable the third-party auditor to identify specific malicious data fog servers within distributed data storage. Under the proposed security model, the security of our scheme is proved, and the scheme further provides collusion resistance and privacy preservation for outsourced data. Additionally, both theoretical and experimental evaluations illustrate the efficiency of our proposed scheme.
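Two ingredients of the scheme, blinding stored blocks and committing to them with a Merkle structure, can be sketched with standard hashing. The toy example below builds a plain binary Merkle root over blinded blocks; it is only an illustration, since the paper uses a Merkle hash grid for cheaper dynamic updates and a proper blinding construction, neither of which is reproduced here.

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def blind(block: bytes, secret: bytes) -> bytes:
    """Toy blinding: hash the block together with an owner-held secret so the
    auditor never sees the plaintext (stand-in for the paper's scheme)."""
    return h(secret + block)

def merkle_root(leaves):
    """Classic binary Merkle root; the Merkle hash *grid* arranges leaves in
    rows and columns to cheapen dynamic updates, which is omitted here."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

secret = os.urandom(32)
blocks = [b"block-0", b"block-1", b"block-2"]
root = merkle_root([blind(b, secret) for b in blocks])
print(root.hex())   # commitment the owner can later check against a proof
```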
Citations: 0
Battery-Aware Workflow Scheduling for Portable Heterogeneous Computing
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-02-01 | DOI: 10.1109/TSUSC.2024.3360975
Fu Jiang;Yaoxin Xia;Lisen Yan;Weirong Liu;Xiaoyong Zhang;Heng Li;Jun Peng
Battery degradation is a major obstacle to extending the operating lifespan of portable heterogeneous computing devices. Excessive energy consumption and prominent current fluctuations can lead to a sharp decline in battery endurance. To address this issue, a battery-aware workflow scheduling algorithm is proposed to maximize the battery lifetime and fully release the computing potential of the device. First, a dynamic optimal budget strategy is developed to select the most cost-effective processors to meet the deadline of each task, accelerating the budget optimization by incorporating a deep neural network. Second, an integer-programming greedy strategy is utilized to determine the start time of each task, minimizing the fluctuation of the battery supply current to mitigate battery degradation. Finally, a long-term operation experiment and Monte Carlo experiments are performed on the battery simulator, SLIDE. The experimental results under real operating conditions for more than 1800 hours validate that the proposed scheduling algorithm can effectively extend the battery life by 7.31%-8.23%. The results on various parallel workflows illustrate that the proposed algorithm achieves performance comparable to the integer programming method while running faster.
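The second step of the algorithm, choosing each task's start time so that the battery supply current stays as flat as possible, can be illustrated with a simple greedy search. The sketch below is only a stand-in for the paper's integer-programming greedy strategy; the task parameters, time discretization, and fluctuation metric are assumptions for illustration.

```python
def supply_current(schedule, horizon):
    """Aggregate battery current per time slot: each scheduled task contributes
    `current` amps over [start, start + dur)."""
    profile = [0.0] * horizon
    for start, dur, current in schedule:
        for t in range(start, min(start + dur, horizon)):
            profile[t] += current
    return profile

def fluctuation(profile):
    """Sum of slot-to-slot current changes; keeping this small is the proxy
    used here for slowing battery degradation."""
    return sum(abs(profile[t] - profile[t - 1]) for t in range(1, len(profile)))

def place_task(schedule, dur, current, release, deadline, horizon):
    """Greedy placement: try every feasible start time within the deadline and
    keep the one that yields the flattest supply current."""
    best_start, best_cost = release, float("inf")
    for start in range(release, deadline - dur + 1):
        trial = schedule + [(start, dur, current)]
        cost = fluctuation(supply_current(trial, horizon))
        if cost < best_cost:
            best_start, best_cost = start, cost
    schedule.append((best_start, dur, current))
    return best_start

sched = []
print(place_task(sched, dur=3, current=1.2, release=0, deadline=10, horizon=12))
print(place_task(sched, dur=2, current=0.8, release=0, deadline=10, horizon=12))
```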
Citations: 0
CloudProphet: A Machine Learning-Based Performance Prediction for Public Clouds
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-01-29 | DOI: 10.1109/TSUSC.2024.3359325
Darong Huang;Luis Costero;Ali Pahlevan;Marina Zapater;David Atienza
Computing servers have played a key role in developing and processing emerging compute-intensive applications in recent years. Consolidating multiple virtual machines (VMs) inside one server to run various applications introduces severe competition for limited resources among VMs. Many techniques such as VM scheduling and resource provisioning are proposed to maximize the cost-efficiency of the computing servers while alleviating the performance interference between VMs. However, these management techniques require accurate performance prediction of the application running inside the VM, which is challenging to obtain in the public cloud due to the black-box nature of the VMs. From this perspective, this paper proposes a novel machine learning-based performance prediction approach for applications running in the cloud. To achieve high-accuracy predictions for black-box VMs, the proposed method first identifies the running application inside the virtual machine. It then selects highly correlated runtime metrics as the input of the machine learning approach to accurately predict the performance level of the cloud application. Experimental results with state-of-the-art cloud benchmarks demonstrate that our proposed method outperforms existing prediction methods by more than 2× in terms of the worst prediction error. In addition, we successfully tackle the challenge of performance prediction for applications with variable workloads by introducing the performance degradation index, which other comparison methods fail to consider. The workflow versatility of the proposed approach has been verified with different modern servers and VM configurations.
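The pipeline outlined here, selecting runtime metrics that correlate strongly with the performance target and then fitting a predictor on them, can be sketched in a few lines of NumPy. The example below uses plain Pearson correlation and a least-squares model as stand-ins for the paper's machine-learning approach; the metric names and synthetic telemetry are hypothetical.

```python
import numpy as np

def select_metrics(X, y, names, top_k=3):
    """Rank runtime metrics by |Pearson correlation| with the performance
    target and keep the top_k, mirroring the metric-selection step."""
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    keep = np.argsort(corr)[::-1][:top_k]
    return keep, [names[j] for j in keep]

def fit_predictor(X, y):
    """Least-squares stand-in for the learned performance model."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])          # add bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: np.hstack([x, np.ones((x.shape[0], 1))]) @ w

# Hypothetical VM telemetry: columns = [cpu_util, llc_misses, net_rx, disk_io]
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)
keep, kept_names = select_metrics(X, y, ["cpu_util", "llc_misses", "net_rx", "disk_io"])
model = fit_predictor(X[:, keep], y)
print(kept_names, model(X[:5, keep]))
```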
Citations: 0
A Novel Resource Management Framework for Blockchain-Based Federated Learning in IoT Networks
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-01-26 | DOI: 10.1109/TSUSC.2024.3358915
Aman Mishra;Yash Garg;Om Jee Pandey;Mahendra K. Shukla;Athanasios V. Vasilakos;Rajesh M. Hegde
At present, centralized learning models used for IoT applications that generate large amounts of data face several challenges such as bandwidth scarcity, higher energy consumption, increased use of computing resources, poor connectivity, high computational complexity, reduced privacy, and large data-transfer latency. In order to address the aforementioned challenges, Blockchain-Enabled Federated Learning Networks (BFLNs) emerged recently, which deal with trained model parameters only, rather than raw data. BFLNs provide enhanced security along with improved energy-efficiency and Quality-of-Service (QoS). However, BFLNs suffer from an exponentially increased action space when deciding the various parameter levels for training and block generation. Motivated by the aforementioned challenges of BFLNs, in this work we propose an actor-critic Reinforcement Learning (RL) method to model the Machine Learning Model Owner (MLMO) in selecting the optimal set of parameter levels, addressing the exponential growth of the action space in BFLNs. Further, due to implicit entropy exploration, the actor-critic RL method balances the exploration-exploitation trade-off and shows better performance than most off-policy methods on large discrete action spaces. Therefore, in this work, considering the mobile scenario of the devices, the MLMO decides the data and energy levels that the mobile devices use for training and determines the block generation rate. This leads to minimized system latency and reduced overall cost, while achieving the target accuracy. Specifically, we have used Proximal Policy Optimization (PPO) as an on-policy actor-critic method with its two variants, one based on Monte Carlo (MC) returns and another based on the Generalized Advantage Estimate (GAE). Our analysis shows that PPO has better exploration and sample efficiency, shorter training time, and consistently higher cumulative rewards than the off-policy Deep Q-Network (DQN).
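Of the two PPO variants mentioned, the GAE-based one replaces Monte Carlo returns with Generalized Advantage Estimation. The short sketch below computes GAE for a toy rollout of the MLMO's decisions; the rewards and critic values are made-up numbers, and terminal-state masking is omitted for brevity.

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation as used by the PPO-GAE variant:
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = sum over l >= 0 of (gamma * lam)^l * delta_{t+l}
    `values` must hold one extra bootstrap entry V(s_T)."""
    advantages, gae = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Toy rollout: three decision epochs of the MLMO choosing data/energy levels.
rewards = [-1.0, -0.5, 2.0]      # e.g. -cost plus an accuracy bonus (hypothetical)
values  = [0.2, 0.4, 0.9, 0.0]   # critic estimates plus the bootstrap value
print(gae_advantages(rewards, values))
```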
Citations: 0
A Prototype-Empowered Kernel-Varying Convolutional Model for Imbalanced Sea State Estimation in IoT-Enabled Autonomous Ship
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-01-12 | DOI: 10.1109/TSUSC.2024.3353183
Mengna Liu;Xu Cheng;Fan Shi;Xiufeng Liu;Hongning Dai;Shengyong Chen
Sea State Estimation (SSE) is essential for Internet of Things (IoT)-enabled autonomous ships, which rely on favorable sea conditions for safe and efficient navigation. Traditional methods, such as wave buoys and radars, are costly, less accurate, and lack real-time capability. Model-driven methods, based on physical models of ship dynamics, are impractical due to wave randomness. Data-driven methods are limited by the data imbalance problem, as some sea states are more frequent and observable than others. To overcome these challenges, we propose a novel data-driven approach for SSE based on ship motion data. Our approach consists of three main components: a data preprocessing module, a parallel convolution feature extractor, and a theoretically ensured distance-based classifier. The data preprocessing module aims to enhance the data quality and reduce sensor noise. The parallel convolution feature extractor uses a kernel-varying convolutional structure to capture distinctive features. The distance-based classifier learns representative prototypes for each sea state and assigns a sample to the nearest prototype based on a distance metric. The efficiency of our model is validated through experiments on two SSE datasets and the UEA archive, encompassing thirty multivariate time series classification tasks. The results reveal the generalizability and robustness of our approach.
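The distance-based classifier assigns a sample to the nearest learned prototype. A minimal sketch of that final stage is given below, using per-class mean embeddings as prototypes and Euclidean distance; the feature dimensionality and the synthetic, imbalanced data are assumptions, and the kernel-varying convolutional extractor that would produce the embeddings is not reproduced.

```python
import numpy as np

def learn_prototypes(features, labels):
    """One prototype per sea state: the mean embedding of its training samples
    (a simple stand-in for the paper's learned prototypes)."""
    classes = sorted(set(labels))
    protos = np.stack([features[np.array(labels) == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(x, classes, prototypes):
    """Assign a sample to the nearest prototype under Euclidean distance."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return classes[int(np.argmin(d))]

rng = np.random.default_rng(1)
# Imbalanced toy embeddings: 20 samples of state 0, only 5 of state 1.
feats = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (5, 8))])
labs = [0] * 20 + [1] * 5
cls, protos = learn_prototypes(feats, labs)
print(classify(rng.normal(3, 1, 8), cls, protos))
```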
Citations: 0
Advancements in Accelerating Deep Neural Network Inference on AIoT Devices: A Survey
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 (Computer Science, Hardware & Architecture) | Pub Date: 2024-01-12 | DOI: 10.1109/TSUSC.2024.3353176
Long Cheng;Yan Gu;Qingzhi Liu;Lei Yang;Cheng Liu;Ying Wang
The amalgamation of artificial intelligence with Internet of Things (AIoT) devices has seen a rapid surge in growth, largely due to the effective implementation of deep neural network (DNN) models across various domains. However, the deployment of DNNs on such devices comes with its own set of challenges, primarily related to computational capacity, storage, and energy efficiency. This survey offers an exhaustive review of techniques designed to accelerate DNN inference on AIoT devices, addressing these challenges head-on. We delve into critical model compression techniques designed to adapt to the limitations of devices and hardware optimization strategies that aim to boost efficiency. Furthermore, we examine parallelization methods that leverage parallel computing for swift inference, as well as novel optimization strategies that fine-tune the execution process. This survey also casts a future-forward glance at emerging trends, including advancements in mobile hardware, the co-design of software and hardware, privacy and security considerations, and DNN inference on AIoT devices with constrained resources. All in all, this survey aspires to serve as a holistic guide to advancements in the acceleration of DNN inference on AIoT devices, aiming to provide sustainable computing for upcoming IoT applications driven by artificial intelligence.
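Among the model-compression techniques such a survey covers, post-training quantization is one of the simplest to illustrate. The sketch below symmetrically quantizes a weight tensor to int8 and measures the round-trip error; it is a generic textbook example, not code from any surveyed system.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8."""
    scale = max(float(np.abs(w).max()), 1e-8) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to float32 for comparison with the original."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```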
Citations: 0