Deep Learning on Large-Scale Multicore Clusters
Kazumasa Sakiyama, S. Kato, Y. Ishikawa, A. Hori, Abraham Monrroy
2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), September 2018
DOI: 10.1109/CAHPC.2018.8645860 (https://doi.org/10.1109/CAHPC.2018.8645860)
Citations: 3
Abstract
Convolutional neural networks (CNNs) have achieved outstanding accuracy among conventional machine learning algorithms. Recent work has shown that large and complicated models, which are costly to train, are needed to reach higher accuracy. To train these models efficiently on high-performance computers (HPCs), many parallelization techniques for CNNs have been developed. However, most techniques mainly target GPUs, and parallelization for CPUs has not been fully investigated. This paper explores CNN training performance on large-scale multicore clusters by optimizing intra-node processing and applying inter-node parallelization techniques developed for multiple GPUs. Detailed experiments conducted on state-of-the-art multicore processors using the OpenMP API and the MPI framework demonstrate that Caffe-based CNNs can be accelerated by well-designed multithreaded programs. We achieved up to a 1.64x speedup in convolution operations with our devised lowering strategy compared to conventional lowering, and a 772x speedup with 864 nodes compared to one node.
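The "lowering" mentioned in the abstract refers to the common im2col technique, which rewrites a convolution as a matrix multiply so that optimized GEMM kernels and multithreading can be applied. The paper's devised lowering strategy is not described in this abstract, so the sketch below shows only the conventional im2col baseline parallelized with OpenMP; the stride-1/no-padding setup, sizes, and function names are illustrative assumptions, not the authors' code.

```c
/* Minimal sketch of conventional im2col lowering with OpenMP.
 * A C x H x W input is unrolled into a (C*K*K) x (OH*OW) patch
 * matrix, so the convolution becomes one GEMM. Illustrative only;
 * not the paper's devised lowering strategy. */
#include <stdlib.h>

/* Unroll input patches into columns (K x K kernel, stride 1, no padding). */
static void im2col(const float *in, int C, int H, int W, int K, float *col)
{
    int OH = H - K + 1, OW = W - K + 1;
    #pragma omp parallel for collapse(2)
    for (int c = 0; c < C; c++)
        for (int kh = 0; kh < K; kh++)
            for (int kw = 0; kw < K; kw++)
                for (int oh = 0; oh < OH; oh++)
                    for (int ow = 0; ow < OW; ow++)
                        col[(((c * K + kh) * K + kw) * OH + oh) * OW + ow] =
                            in[(c * H + oh + kh) * W + ow + kw];
}

/* Naive GEMM over the lowered matrix: out += weight * col.
 * In practice this call would go to an optimized BLAS sgemm. */
static void gemm(const float *a, const float *b, float *out,
                 int M, int Kdim, int N)
{
    #pragma omp parallel for
    for (int i = 0; i < M; i++)
        for (int k = 0; k < Kdim; k++)
            for (int j = 0; j < N; j++)
                out[i * N + j] += a[i * Kdim + k] * b[k * N + j];
}

int main(void)
{
    enum { C = 3, H = 8, W = 8, K = 3, M = 4 };  /* toy sizes */
    int OH = H - K + 1, OW = W - K + 1;
    float *in  = calloc((size_t)C * H * W, sizeof(float));
    float *col = calloc((size_t)C * K * K * OH * OW, sizeof(float));
    float *wgt = calloc((size_t)M * C * K * K, sizeof(float));
    float *out = calloc((size_t)M * OH * OW, sizeof(float));

    im2col(in, C, H, W, K, col);                 /* lower */
    gemm(wgt, col, out, M, C * K * K, OH * OW);  /* multiply */

    free(in); free(col); free(wgt); free(out);
    return 0;
}
```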
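Likewise, the inter-node parallelization techniques borrowed from multi-GPU training typically amount to data parallelism with a gradient all-reduce: each node computes gradients on its own mini-batch shard, and the gradients are averaged across nodes before the weight update. The abstract does not specify the exact scheme used, so the following is only a minimal sketch of that general pattern with MPI; NPARAMS and the elided gradient computation are placeholders.

```c
/* Minimal sketch of data-parallel gradient averaging with MPI.
 * Illustrates the general multi-node pattern, not the paper's
 * specific scheme. */
#include <mpi.h>
#include <stdlib.h>

#define NPARAMS 1024  /* illustrative parameter count */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *grad = calloc(NPARAMS, sizeof(float));
    /* ... each rank fills grad[] from its local mini-batch shard ... */

    /* Sum gradients across all nodes in place, then divide to average. */
    MPI_Allreduce(MPI_IN_PLACE, grad, NPARAMS, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);
    for (int i = 0; i < NPARAMS; i++)
        grad[i] /= (float)size;

    /* ... apply the averaged gradient to the local weight copy ... */

    free(grad);
    MPI_Finalize();
    return 0;
}
```

In practice the all-reduce for one layer can be overlapped with backpropagation of earlier layers to hide communication cost, which matters at scales like the 864 nodes reported above.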