Program Analysis and Machine Learning–based Approach to Predict Power Consumption of CUDA Kernel

ACM Transactions on Modeling and Performance Evaluation of Computing Systems, Vol. 8, No. 1, pp. 1–24 · IF 0.7, Q4 (Computer Science, Information Systems) · Pub Date: 2023-06-05 · DOI: 10.1145/3603533
Gargi Alavani, Jineet Desai, Snehanshu Saha, S. Sarkar
{"title":"Program Analysis and Machine Learning–based Approach to Predict Power Consumption of CUDA Kernel","authors":"Gargi Alavani, Jineet Desai, Snehanshu Saha, S. Sarkar","doi":"10.1145/3603533","DOIUrl":null,"url":null,"abstract":"The General Purpose Graphics Processing Unit has secured a prominent position in the High-Performance Computing world due to its performance gain and programmability. Understanding the relationship between Graphics Processing Unit (GPU) power consumption and program features can aid developers in building energy-efficient sustainable applications. In this work, we propose a static analysis-based power model built using machine learning techniques. We have investigated six machine learning models across three NVIDIA GPU architectures: Kepler, Maxwell, and Volta with Random Forest, Extra Trees, Gradient Boosting, CatBoost, and XGBoost reporting favorable results. We observed that the XGBoost technique-based prediction model is the most efficient technique with an R2 value of 0.9646 on Volta Architecture. The dataset used for these techniques includes kernels from different benchmarks suits, sizes, nature (e.g., compute-bound, memory-bound), and complexity (e.g., control divergence, memory access patterns). Experimental results suggest that the proposed solution can help developers precisely predict GPU applications power consumption using program analysis across GPU architectures. Developers can use this approach to refactor their code to build energy-efficient GPU applications.","PeriodicalId":56350,"journal":{"name":"ACM Transactions on Modeling and Performance Evaluation of Computing Systems","volume":"8 1","pages":"1 - 24"},"PeriodicalIF":0.7000,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Modeling and Performance Evaluation of Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3603533","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 1

Abstract

The General Purpose Graphics Processing Unit has secured a prominent position in the High-Performance Computing world due to its performance gain and programmability. Understanding the relationship between Graphics Processing Unit (GPU) power consumption and program features can aid developers in building energy-efficient, sustainable applications. In this work, we propose a static-analysis-based power model built using machine learning techniques. We investigated six machine learning models across three NVIDIA GPU architectures: Kepler, Maxwell, and Volta, with Random Forest, Extra Trees, Gradient Boosting, CatBoost, and XGBoost reporting favorable results. We observed that the XGBoost-based prediction model is the most effective, with an R² value of 0.9646 on the Volta architecture. The dataset used for these techniques includes kernels from different benchmark suites, of varying sizes, nature (e.g., compute-bound, memory-bound), and complexity (e.g., control divergence, memory access patterns). Experimental results suggest that the proposed solution can help developers precisely predict the power consumption of GPU applications using program analysis across GPU architectures. Developers can use this approach to refactor their code to build energy-efficient GPU applications.
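To make the modeling pipeline described above concrete, the sketch below shows how a regression model of this kind could be trained on static kernel features and evaluated with R², the metric reported in the abstract. It is only an illustration under assumed inputs: the feature names, the file kernel_static_features_volta.csv, and the XGBoost hyperparameters are hypothetical and are not taken from the paper or its dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

# Hypothetical static features extracted per CUDA kernel (e.g., from PTX/SASS analysis);
# the actual feature set used in the paper may differ.
FEATURES = [
    "fp_instructions", "int_instructions", "global_loads", "global_stores",
    "shared_mem_accesses", "branch_instructions",
    "threads_per_block", "registers_per_thread",
]

# Assumed dataset: one row per (kernel, launch configuration) with a measured
# average power column "power_watts" for the target GPU (e.g., Volta).
df = pd.read_csv("kernel_static_features_volta.csv")  # hypothetical file name

X = df[FEATURES].to_numpy()
y = df["power_watts"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Gradient-boosted regression trees; hyperparameters are illustrative, not tuned.
model = XGBRegressor(
    n_estimators=400, max_depth=6, learning_rate=0.05,
    subsample=0.8, objective="reg:squarederror",
)
model.fit(X_train, y_train)

# Goodness of fit on held-out kernels, reported the same way as in the abstract.
print("R^2:", r2_score(y_test, model.predict(X_test)))
```

Tree-based ensembles such as XGBoost handle heterogeneous, unscaled count features without normalization, which is consistent with the abstract's observation that boosted and bagged tree models performed favorably across the three architectures.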