High Performance Coordinate Descent Matrix Factorization for Recommender Systems
Xi Yang, Jianbin Fang, Jing Chen, Chengkun Wu, T. Tang, Kai Lu
{"title":"推荐系统的高性能坐标下降矩阵分解","authors":"Xi Yang, Jianbin Fang, Jing Chen, Chengkun Wu, T. Tang, Kai Lu","doi":"10.1145/3075564.3077625","DOIUrl":null,"url":null,"abstract":"Coordinate descent (CD) has been proved to be an effective technique for matrix factorization (MF) in recommender systems. To speed up factorizing performance, various methods of implementing parallel CDMF have been proposed to leverage modern multi-core CPUs and many-core GPUs. Existing implementations are limited in either speed or portability (constrained to certain platforms). In this paper, we present an efficient and portable CDMF solver for recommender systems. On the one hand, we diagnose the baseline implementation and observe that it lacks the awareness of the hierarchical thread organization on modern hardware and the data variance of the rating matrix. Thus, we apply the thread batching technique and the load balancing technique to achieve high performance. On the other hand, we implement the CDMF solver in OpenCL so that it can run on various platforms. Based on the architectural specifics, we customize code variants to efficiently map them to the underlying hardware. The experimental results show that our implementation performs 2x faster on dual-socket Intel Xeon CPUs and 22x faster on an NVIDIA K20c GPU than the baseline implementations. When taking the CDMF solver as a benchmark, we observe that it runs 2.4x faster on the GPU than on the CPUs, whereas it achieves competitive performance on Intel MIC against the CPUs.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"High Performance Coordinate Descent Matrix Factorization for Recommender Systems\",\"authors\":\"Xi Yang, Jianbin Fang, Jing Chen, Chengkun Wu, T. Tang, Kai Lu\",\"doi\":\"10.1145/3075564.3077625\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Coordinate descent (CD) has been proved to be an effective technique for matrix factorization (MF) in recommender systems. To speed up factorizing performance, various methods of implementing parallel CDMF have been proposed to leverage modern multi-core CPUs and many-core GPUs. Existing implementations are limited in either speed or portability (constrained to certain platforms). In this paper, we present an efficient and portable CDMF solver for recommender systems. On the one hand, we diagnose the baseline implementation and observe that it lacks the awareness of the hierarchical thread organization on modern hardware and the data variance of the rating matrix. Thus, we apply the thread batching technique and the load balancing technique to achieve high performance. On the other hand, we implement the CDMF solver in OpenCL so that it can run on various platforms. Based on the architectural specifics, we customize code variants to efficiently map them to the underlying hardware. The experimental results show that our implementation performs 2x faster on dual-socket Intel Xeon CPUs and 22x faster on an NVIDIA K20c GPU than the baseline implementations. 
When taking the CDMF solver as a benchmark, we observe that it runs 2.4x faster on the GPU than on the CPUs, whereas it achieves competitive performance on Intel MIC against the CPUs.\",\"PeriodicalId\":398898,\"journal\":{\"name\":\"Proceedings of the Computing Frontiers Conference\",\"volume\":\"75 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Computing Frontiers Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3075564.3077625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Computing Frontiers Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3075564.3077625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
High Performance Coordinate Descent Matrix Factorization for Recommender Systems
Coordinate descent (CD) has proved to be an effective technique for matrix factorization (MF) in recommender systems. To speed up factorization, various parallel CDMF implementations have been proposed that leverage modern multi-core CPUs and many-core GPUs. Existing implementations are limited in either speed or portability (they are constrained to specific platforms). In this paper, we present an efficient and portable CDMF solver for recommender systems. On the one hand, we diagnose the baseline implementation and observe that it is unaware of both the hierarchical thread organization of modern hardware and the data variance of the rating matrix; we therefore apply thread batching and load balancing to achieve high performance. On the other hand, we implement the CDMF solver in OpenCL so that it can run on a variety of platforms, customizing code variants to the architectural specifics so that they map efficiently onto the underlying hardware. The experimental results show that our implementation runs 2x faster on dual-socket Intel Xeon CPUs and 22x faster on an NVIDIA K20c GPU than the baseline implementations. When taking the CDMF solver as a benchmark, we observe that it runs 2.4x faster on the GPU than on the CPUs, whereas on Intel MIC it achieves performance competitive with the CPUs.
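For readers unfamiliar with the update rule behind CDMF, the sketch below illustrates the one-variable coordinate descent step such solvers are built around: with all other entries held fixed, each entry of a factor matrix has a closed-form minimizer of the regularized squared error over the observed ratings. This is a minimal serial reference in C++, not the authors' OpenCL implementation; the CSR layout, variable names, and the single sweep over W are assumptions made for brevity (a full solver would alternate with a symmetric sweep over the column factors H). The inner loops also make visible why load balancing matters: their trip counts follow the number of ratings per user, which varies widely across the rating matrix.

```cpp
// Minimal serial sketch of coordinate descent matrix factorization (CDMF).
// Illustrative only: not the paper's OpenCL solver; data layout and the
// update schedule are simplifying assumptions.
#include <cstdio>
#include <vector>

struct SparseMatrix {              // rating matrix R in CSR form
    int rows, cols;
    std::vector<int>    row_ptr;   // size rows + 1
    std::vector<int>    col_idx;   // column index of each nonzero
    std::vector<double> val;       // observed ratings
};

// One sweep of coordinate descent over all entries of the row-factor matrix W.
// R is approximated by W * H^T, with W (rows x k) and H (cols x k) stored
// row-major as flat vectors; lambda is the L2 regularization weight.
void cd_sweep_W(const SparseMatrix& R, std::vector<double>& W,
                const std::vector<double>& H, int k, double lambda) {
    for (int i = 0; i < R.rows; ++i) {
        for (int t = 0; t < k; ++t) {          // update one coordinate W(i, t)
            double num = 0.0, den = lambda;
            for (int p = R.row_ptr[i]; p < R.row_ptr[i + 1]; ++p) {
                int j = R.col_idx[p];
                double pred = 0.0;             // current prediction w_i . h_j
                for (int f = 0; f < k; ++f) pred += W[i * k + f] * H[j * k + f];
                // residual with the t-th term removed: r_ij - sum_{f != t} w_if * h_jf
                double resid = R.val[p] - pred + W[i * k + t] * H[j * k + t];
                num += resid * H[j * k + t];
                den += H[j * k + t] * H[j * k + t];
            }
            W[i * k + t] = num / den;          // closed-form one-variable solve
        }
    }
}

int main() {
    // Toy 2x3 rating matrix with 4 observed entries, factorized at rank k = 2.
    SparseMatrix R{2, 3, {0, 2, 4}, {0, 2, 1, 2}, {5.0, 3.0, 4.0, 1.0}};
    const int k = 2;
    std::vector<double> W(R.rows * k, 0.1), H(R.cols * k, 0.1);
    for (int iter = 0; iter < 20; ++iter)
        cd_sweep_W(R, W, H, k, /*lambda=*/0.05);  // only W is refined in this sketch
    std::printf("W row 0 = (%.3f, %.3f)\n", W[0], W[1]);
    return 0;
}
```

In this serial form, the per-row work is proportional to the number of that user's ratings; the thread batching and load balancing described in the abstract are, in essence, about distributing these irregular loops evenly across the hierarchical thread groups of CPUs, GPUs, and MIC.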