{"title":"Improve Machine Learning carbon footprint using Nvidia GPU and Mixed Precision training for classification algorithms","authors":"Andrew Antonopoulos","doi":"arxiv-2409.07853","DOIUrl":null,"url":null,"abstract":"This study was part of my dissertation for my master degree and compares the\npower consumption using the default floating point (32bit) and Nvidia mixed\nprecision (16bit and 32bit) while training a classification ML model. A custom\nPC with specific hardware was built to perform the experiments, and different\nML hyper-parameters, such as batch size, neurons, and epochs, were chosen to\nbuild Deep Neural Networks (DNN). Additionally, various software was used\nduring the experiments to collect the power consumption data in Watts from the\nGraphics Processing Unit (GPU), Central Processing Unit (CPU), Random Access\nMemory (RAM) and manually from a wattmeter connected to the wall. A\nbenchmarking test with default hyper parameter values for the DNN was used as a\nreference, while the experiments used a combination of different settings. The\nresults were recorded in Excel, and descriptive statistics were chosen to\ncalculate the mean between the groups and compare them using graphs and tables.\nThe outcome was positive when using mixed precision combined with specific\nhyper-parameters. Compared to the benchmarking, the optimisation for the\nclassification reduced the power consumption between 7 and 11 Watts. Similarly,\nthe carbon footprint is reduced because the calculation uses the same power\nconsumption data. Still, a consideration is required when configuring\nhyper-parameters because it can negatively affect hardware performance.\nHowever, this research required inferential statistics, specifically ANOVA and\nT-test, to compare the relationship between the means. Furthermore, tests\nindicated no statistical significance of the relationship between the\nbenchmarking and experiments. However, a more extensive implementation with a\ncluster of GPUs can increase the sample size significantly, as it is an\nessential factor and can change the outcome of the statistical analysis.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This study was part of my dissertation for my master's degree and compares the power consumption of training a classification ML model with the default 32-bit floating point against Nvidia mixed precision (16-bit and 32-bit). A custom PC with specific hardware was built to perform the experiments, and different ML hyper-parameters, such as batch size, neurons, and epochs, were chosen to build Deep Neural Networks (DNNs).
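For illustration, the sketch below shows how Nvidia mixed precision can be enabled for a small classification DNN. The abstract does not name the framework or the exact hyper-parameter values, so TensorFlow/Keras, the layer sizes, and the input shape here are assumptions rather than the paper's settings.

```python
# Minimal sketch: enabling Nvidia mixed precision for a small classification DNN.
# Framework (TensorFlow/Keras) and all hyper-parameter values are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16 on the GPU while keeping variables in float32.
mixed_precision.set_global_policy("mixed_float16")

# Hyper-parameters of the kind varied in the experiments (values are placeholders).
batch_size, neurons, epochs = 64, 128, 10

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                 # placeholder input dimension
    layers.Dense(neurons, activation="relu"),
    layers.Dense(neurons, activation="relu"),
    # Keep the final softmax in float32 for numerical stability.
    layers.Dense(10, activation="softmax", dtype="float32"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
```

When the policy is mixed_float16, Keras wraps the optimizer with loss scaling automatically during fit, which is why no explicit scaler appears in the sketch.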
Additionally, several software tools were used during the experiments to collect power consumption data in Watts from the Graphics Processing Unit (GPU), Central Processing Unit (CPU) and Random Access Memory (RAM), and manually from a wattmeter connected to the wall.
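As a hedged example of this kind of logging, the snippet below polls the GPU power draw through nvidia-smi and writes timestamped samples to a CSV file; the actual monitoring tools, file names, and sampling interval used in the study are not specified in the abstract.

```python
# Minimal sketch of GPU power sampling via nvidia-smi (one common approach;
# not necessarily the tooling used in the study).
import csv
import subprocess
import time

def gpu_power_watts() -> float:
    """Return the current GPU board power draw reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])

def log_power(path: str, seconds: int = 60, interval: float = 1.0) -> None:
    """Sample power draw once per interval and append timestamped rows to a CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "gpu_power_w"])
        start = time.time()
        while time.time() - start < seconds:
            writer.writerow([round(time.time() - start, 1), gpu_power_watts()])
            time.sleep(interval)

# log_power("gpu_power.csv", seconds=300)  # run alongside a training job
```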
A benchmarking test with default hyper-parameter values for the DNN was used as a reference, while the experiments used combinations of different settings. The results were recorded in Excel, and descriptive statistics were used to calculate the group means and compare them using graphs and tables.
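A minimal sketch of that descriptive comparison, assuming the recorded samples are exported with one row per measurement and a column identifying the run (the file and column names are hypothetical):

```python
# Illustrative only: compute the mean power draw per run from an exported log.
import pandas as pd

df = pd.read_excel("power_log.xlsx")           # assumed columns: run, gpu_power_w
means = df.groupby("run")["gpu_power_w"].mean()
print(means)                                    # mean Watts per benchmark/experiment run
```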
The outcome was positive when mixed precision was combined with specific hyper-parameters. Compared to the benchmark, the optimisation reduced the power consumption of the classification model by 7 to 11 Watts. Similarly, the carbon footprint is reduced, because its calculation uses the same power consumption data. Still, care is required when configuring hyper-parameters, because certain settings can negatively affect hardware performance.
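The footprint calculation implied here is essentially energy multiplied by grid carbon intensity, so a lower average power draw over the same training time yields proportionally lower emissions. A small illustrative sketch (the intensity value and run times are assumptions, not the paper's figures):

```python
# Illustrative carbon-footprint estimate: energy (kWh) x grid intensity (kg CO2e/kWh).
def carbon_kg(avg_power_w: float, hours: float, grid_kgco2_per_kwh: float = 0.4) -> float:
    energy_kwh = avg_power_w * hours / 1000.0
    return energy_kwh * grid_kgco2_per_kwh

baseline  = carbon_kg(avg_power_w=300.0, hours=2.0)   # hypothetical benchmark run
optimised = carbon_kg(avg_power_w=290.0, hours=2.0)   # ~10 W lower, as in the reported range
print(f"saving: {baseline - optimised:.4f} kg CO2e")
```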
This research also required inferential statistics, specifically ANOVA and the T-test, to compare the group means. These tests indicated no statistically significant difference between the benchmark and the experiments. However, a more extensive implementation with a cluster of GPUs could increase the sample size significantly; sample size is an essential factor and could change the outcome of the statistical analysis.
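A minimal sketch of the two tests named above, using scipy with illustrative power samples rather than the study's data:

```python
# Illustrative one-way ANOVA and independent-samples t-test on power samples (Watts).
from scipy import stats

benchmark  = [301.2, 299.8, 302.5, 300.1]   # hypothetical values
experiment = [291.0, 293.4, 290.2, 292.7]

f_stat, p_anova = stats.f_oneway(benchmark, experiment)
t_stat, p_ttest = stats.ttest_ind(benchmark, experiment, equal_var=False)
print(p_anova, p_ttest)   # p > 0.05 would indicate no statistically significant difference
```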