Uncovering Energy-Efficient Practices in Deep Learning Training: Preliminary Steps Towards Green AI

Tim Yarally, Luís Cruz, Daniel Feitosa, June Sallou, A. V. Deursen
DOI: 10.1109/CAIN58948.2023.00012
Published in: 2023 IEEE/ACM 2nd International Conference on AI Engineering – Software Engineering for AI (CAIN)
Publication date: 2023-03-24
Citations: 4

Abstract

Modern AI practices all strive towards the same goal: better results. In the context of deep learning, the term "results" often refers to the achieved accuracy on a competitive problem set. In this paper, we adopt an idea from the emerging field of Green AI to consider energy consumption as a metric of equal importance to accuracy and to reduce any irrelevant tasks or energy usage. We examine the training stage of the deep learning pipeline from a sustainability perspective, through the study of hyperparameter tuning strategies and the model complexity, two factors vastly impacting the overall pipeline's energy consumption. First, we investigate the effectiveness of grid search, random search and Bayesian optimisation during hyperparameter tuning, and we find that Bayesian optimisation significantly dominates the other strategies. Furthermore, we analyse the architecture of convolutional neural networks with the energy consumption of three prominent layer types: convolutional, linear and ReLU layers. The results show that convolutional layers are the most computationally expensive by a strong margin. Additionally, we observe diminishing returns in accuracy for more energy-hungry models. The overall energy consumption of training can be halved by reducing the network complexity. In conclusion, we highlight innovative and promising energy-efficient practices for training deep learning models. To expand the application of Green AI, we advocate for a shift in the design of deep learning models, by considering the trade-off between energy efficiency and accuracy.
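The abstract's comparison of grid search, random search and Bayesian optimisation rests on a simple energy argument: every hyperparameter evaluation is a full training run, so the number of evaluations a strategy spends is a first-order proxy for its energy bill. A minimal sketch of that accounting is below; the toy accuracy surface, grid values and budget are hypothetical illustrations, not taken from the paper:

```python
import itertools
import random

# Toy accuracy surface over two hyperparameters. In practice each call
# would be a full, energy-hungry training run, so we count calls.
def toy_accuracy(lr, width):
    return 1.0 - ((lr - 0.01) ** 2) * 100.0 - abs(width - 64) / 640.0

def grid_search(lrs, widths):
    """Exhaustively evaluate every combination."""
    evals, best = 0, -1.0
    for lr, width in itertools.product(lrs, widths):
        evals += 1
        best = max(best, toy_accuracy(lr, width))
    return best, evals

def random_search(lrs, widths, budget, seed=0):
    """Sample a fixed budget of random configurations."""
    rng = random.Random(seed)
    best = -1.0
    for _ in range(budget):
        best = max(best, toy_accuracy(rng.choice(lrs), rng.choice(widths)))
    return best, budget

lrs = [0.001, 0.005, 0.01, 0.05, 0.1]
widths = [16, 32, 64, 128, 256]
g_acc, g_evals = grid_search(lrs, widths)          # 25 trainings
r_acc, r_evals = random_search(lrs, widths, budget=10)  # 10 trainings
print(f"grid:   acc={g_acc:.3f} after {g_evals} trainings")
print(f"random: acc={r_acc:.3f} after {r_evals} trainings")
```

Bayesian optimisation goes one step further than random search by using past evaluations to propose the next configuration, which is how it can dominate both baselines at a smaller training (and energy) budget.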
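The layer-cost finding can be made concrete with a back-of-envelope multiply-accumulate (MAC) count, a common proxy for compute and hence energy. The layer shapes below are illustrative assumptions, not the paper's benchmark networks; ReLU is omitted because it costs only one comparison per element:

```python
# MAC counts for a conv layer vs. a fully connected (linear) layer.

def conv2d_macs(c_in, c_out, k, h_out, w_out):
    # Each of the c_out * h_out * w_out outputs needs c_in * k * k MACs.
    return c_in * c_out * k * k * h_out * w_out

def linear_macs(n_in, n_out):
    # One MAC per (input, output) pair.
    return n_in * n_out

# A 3x3 conv, 64 -> 64 channels, on a 32x32 feature map ...
conv = conv2d_macs(64, 64, 3, 32, 32)
# ... versus a fully connected layer from 1024 to 512 units.
fc = linear_macs(1024, 512)
print(f"conv MACs: {conv:,}")          # 37,748,736
print(f"linear MACs: {fc:,}")          # 524,288
print(f"conv / linear: {conv // fc}x") # 72x
```

Even with illustrative shapes, the conv layer's MAC count is orders of magnitude larger than the linear layer's, which is consistent with the abstract's observation that convolutional layers dominate training cost by a strong margin.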