{"title":"Performance of CPUs and GPUs on Deep Learning Models For Heterogeneous Datasets","authors":"N. S, Manu S Rao, Sagar B M, P. T, Cauvery N K","doi":"10.1109/ICECA55336.2022.10009148","DOIUrl":null,"url":null,"abstract":"Deep learning is a branch of Artificial Intelligence (AI) where neural networks are trained to learn patterns from large amounts of data. The primary issue raised by the growth in data volume and diversity of neural networks is selecting hardware accelerators that are effective and appropriate for the specified dataset and selected neural network. This paper studies the performance of CPU and GPU based on the input data size, size of data batches and type of neural network chosen. Four datasets were chosen for benchmark testing, these included a csv data file, a textual dataset and two image datasets. Suitable neural networks were chosen for given data sets. Tests were performed on Intel i5 9th gen CPU and NVIDIA GeForce GTX 1650 GPU. The results show that performance of CPU and GPU doesn't depend on the data format, but rather depends on the type of architecture of the neural network. Neural networks which support parallelization, provide performance boost in GPU s compared to CPUs. When ANN architecture was used, CPUs performed 1.2 times better than GPUs in terms of execution time. With deeper CNN models GPUs performed 8.8 times and with RNNs 4.90 times faster than CPU s. Linear relation between dataset size and training time was observed and GPUs outdid CPUs when batch size was increased irrespective of NN architecture.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECA55336.2022.10009148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning is a branch of Artificial Intelligence (AI) in which neural networks are trained to learn patterns from large amounts of data. The primary issue raised by the growth in data volume and the diversity of neural networks is selecting hardware accelerators that are effective and appropriate for a given dataset and neural network. This paper studies the performance of CPUs and GPUs as a function of input data size, batch size, and the type of neural network chosen. Four datasets were selected for benchmark testing: a CSV data file, a textual dataset, and two image datasets. Suitable neural networks were chosen for each dataset. Tests were performed on an Intel i5 9th-generation CPU and an NVIDIA GeForce GTX 1650 GPU. The results show that the relative performance of the CPU and GPU does not depend on the data format, but rather on the architecture of the neural network. Neural networks that support parallelization yield a performance boost on GPUs compared to CPUs. With an ANN architecture, the CPU performed 1.2 times better than the GPU in terms of execution time. With deeper CNN models the GPU was 8.8 times faster, and with RNNs 4.90 times faster, than the CPU. A linear relationship between dataset size and training time was observed, and the GPU outperformed the CPU as batch size increased, irrespective of the neural network architecture.
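The kind of CPU-versus-GPU timing comparison described above can be reproduced in outline with a few lines of PyTorch. The sketch below is not the authors' benchmark code: the small CNN, the synthetic 32x32 RGB data, the batch sizes, and the step count are all illustrative assumptions, chosen only to show how one device's training time can be measured against the other's as batch size grows.

```python
# Minimal sketch (not the authors' benchmark): times a short training run of a
# small CNN on synthetic image data, on CPU and on GPU when one is available,
# for several batch sizes. Model, data shapes, and batch sizes are assumptions.
import time
import torch
import torch.nn as nn


def make_cnn(num_classes: int = 10) -> nn.Module:
    # A small stand-in for the deeper CNN models benchmarked in the paper.
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, num_classes),  # 32x32 input -> 8x8 feature maps
    )


def time_training(device: torch.device, batch_size: int, steps: int = 50) -> float:
    model = make_cnn().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # Synthetic 32x32 RGB batch, created once and reused every step.
    x = torch.randn(batch_size, 3, 32, 32, device=device)
    y = torch.randint(0, 10, (batch_size,), device=device)

    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels before stopping the clock
    return time.perf_counter() - start


if __name__ == "__main__":
    devices = [torch.device("cpu")]
    if torch.cuda.is_available():
        devices.append(torch.device("cuda"))
    for batch_size in (32, 128, 512):
        for device in devices:
            elapsed = time_training(device, batch_size)
            print(f"device={device.type:4s} batch={batch_size:4d} time={elapsed:.2f}s")
```

On a machine with both devices, printing the elapsed times side by side for increasing batch sizes is enough to see the trend the paper reports: the GPU's advantage widens as the batch size grows, while a shallow, poorly parallelizable model can still favor the CPU.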