Performance of CPUs and GPUs on Deep Learning Models For Heterogeneous Datasets

N. S, Manu S Rao, Sagar B M, P. T, Cauvery N K
{"title":"Performance of CPUs and GPUs on Deep Learning Models For Heterogeneous Datasets","authors":"N. S, Manu S Rao, Sagar B M, P. T, Cauvery N K","doi":"10.1109/ICECA55336.2022.10009148","DOIUrl":null,"url":null,"abstract":"Deep learning is a branch of Artificial Intelligence (AI) where neural networks are trained to learn patterns from large amounts of data. The primary issue raised by the growth in data volume and diversity of neural networks is selecting hardware accelerators that are effective and appropriate for the specified dataset and selected neural network. This paper studies the performance of CPU and GPU based on the input data size, size of data batches and type of neural network chosen. Four datasets were chosen for benchmark testing, these included a csv data file, a textual dataset and two image datasets. Suitable neural networks were chosen for given data sets. Tests were performed on Intel i5 9th gen CPU and NVIDIA GeForce GTX 1650 GPU. The results show that performance of CPU and GPU doesn't depend on the data format, but rather depends on the type of architecture of the neural network. Neural networks which support parallelization, provide performance boost in GPU s compared to CPUs. When ANN architecture was used, CPUs performed 1.2 times better than GPUs in terms of execution time. With deeper CNN models GPUs performed 8.8 times and with RNNs 4.90 times faster than CPU s. Linear relation between dataset size and training time was observed and GPUs outdid CPUs when batch size was increased irrespective of NN architecture.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECA55336.2022.10009148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep learning is a branch of Artificial Intelligence (AI) in which neural networks are trained to learn patterns from large amounts of data. The primary issue raised by the growth in data volume and the diversity of neural networks is selecting hardware accelerators that are effective and appropriate for the specified dataset and selected neural network. This paper studies the performance of CPUs and GPUs with respect to input data size, batch size, and the type of neural network chosen. Four datasets were chosen for benchmark testing: a CSV data file, a textual dataset, and two image datasets. Suitable neural networks were chosen for the given datasets. Tests were performed on an Intel i5 9th-gen CPU and an NVIDIA GeForce GTX 1650 GPU. The results show that the relative performance of the CPU and GPU does not depend on the data format, but rather on the architecture of the neural network. Neural networks that support parallelization provide a performance boost on GPUs compared to CPUs. When an ANN architecture was used, CPUs were 1.2 times faster than GPUs in terms of execution time. With deeper CNN models, GPUs were 8.8 times faster than CPUs, and with RNNs, 4.9 times faster. A linear relation between dataset size and training time was observed, and GPUs outperformed CPUs as batch size increased, irrespective of the neural network architecture.
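A minimal benchmarking sketch of the kind of comparison the abstract describes is shown below. The framework (TensorFlow/Keras), the small CNN architecture, the MNIST dataset, the batch sizes, and the epoch count are all illustrative assumptions, not the authors' published setup; the sketch only demonstrates how wall-clock training time on CPU versus GPU can be measured while varying batch size.

```python
# Hedged sketch (not the authors' code): time the same Keras CNN on CPU and GPU
# across several batch sizes. Framework, model, dataset, and hyperparameters are
# assumptions chosen for illustration only.
import time
import tensorflow as tf

def build_cnn(input_shape=(28, 28, 1), num_classes=10):
    # A small stand-in CNN; the paper's "deeper CNN" models are not specified here.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def time_training(device, x, y, batch_size, epochs=3):
    # Pin model creation and training to the requested device, return wall-clock seconds.
    with tf.device(device):
        model = build_cnn()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        start = time.perf_counter()
        model.fit(x, y, batch_size=batch_size, epochs=epochs, verbose=0)
        return time.perf_counter() - start

if __name__ == "__main__":
    # MNIST is used only as a convenient image dataset; the paper used its own four datasets.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0

    for batch_size in (32, 128, 512):          # vary batch size, as in the study
        for device in ("/CPU:0", "/GPU:0"):    # the GPU case requires a visible GPU
            seconds = time_training(device, x_train, y_train, batch_size)
            print(f"device={device} batch={batch_size} train_time={seconds:.1f}s")
```

Under this kind of harness, the abstract's findings would appear as the GPU's advantage growing with model depth and batch size, while a shallow fully connected model may run faster on the CPU.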