Optimization of a Sparse Grid-Based Data Mining Kernel for Architectures Using AVX-512

Paul-Cristian Sarbu, H. Bungartz
{"title":"Optimization of a Sparse Grid-Based Data Mining Kernel for Architectures Using AVX-512","authors":"Paul-Cristian Sarbu, H. Bungartz","doi":"10.1109/CAHPC.2018.8645913","DOIUrl":null,"url":null,"abstract":"Sparse grids have already been successfully used in various high-performance computing (HPC) applications, including data mining. In this article, we take a legacy classification kernel previously optimized for the AVX2 instruction set and investigate the benefits of using the newer AVX-S12-based multi-and many-core architectures. In particular, the Knights Landing (KNL) processor is used to study the possible performance gains of the code. Not all kernels benefit equally from such architectures, therefore choices in optimization steps and KNL cluster and memory modes need to be filtered through the lens of the code implementation at hand. With a less traditional approach of manual vectorization through instruction-level intrinsics, our kernel provides a differently faceted look into the optimization process. Observations stem from results obtained for node-and cluster-level classification simulations with up to 2^28 multidimensional training data points, using the CooLMUC-3cluster of the Leibniz Supercomputing Center (LRZ) in Garching, Germany.","PeriodicalId":307747,"journal":{"name":"2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAHPC.2018.8645913","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Sparse grids have already been used successfully in various high-performance computing (HPC) applications, including data mining. In this article, we take a legacy classification kernel previously optimized for the AVX2 instruction set and investigate the benefits of moving to the newer AVX-512-based multi- and many-core architectures. In particular, the Knights Landing (KNL) processor is used to study the possible performance gains of the code. Not all kernels benefit equally from such architectures; therefore, choices in optimization steps and in KNL cluster and memory modes need to be filtered through the lens of the code implementation at hand. With the less traditional approach of manual vectorization through instruction-level intrinsics, our kernel provides a differently faceted look into the optimization process. Observations stem from results obtained for node- and cluster-level classification simulations with up to 2^28 multidimensional training data points, using the CooLMUC-3 cluster of the Leibniz Supercomputing Centre (LRZ) in Garching, Germany.
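The abstract refers to manual vectorization through instruction-level intrinsics. The following is a minimal illustrative sketch of what such AVX-512 intrinsics code can look like for a sparse-grid-style hat basis evaluation; it is not the paper's actual kernel, and the function name `hat_basis_8`, the data layout, and the chosen level/index values are assumptions made here purely for illustration. Each `__m512d` register holds eight doubles, i.e. eight data points are evaluated per instruction (versus four with AVX2).

```cpp
// Illustrative sketch only (not the paper's kernel): evaluating the 1D hat
// basis function phi_{l,i}(x) = max(1 - |2^l * x - i|, 0) for eight data
// points at once with AVX-512 intrinsics. Compile with e.g. -mavx512f.
#include <immintrin.h>
#include <cstdio>

// Evaluate phi_{l,i} for 8 points packed in one 512-bit register.
// level2 = 2^l and index = i are passed as scalars and broadcast.
static inline __m512d hat_basis_8(__m512d x, double level2, double index) {
    __m512d lx  = _mm512_mul_pd(x, _mm512_set1_pd(level2));   // 2^l * x
    __m512d t   = _mm512_sub_pd(lx, _mm512_set1_pd(index));   // 2^l * x - i
    __m512d at  = _mm512_abs_pd(t);                           // |2^l * x - i|
    __m512d val = _mm512_sub_pd(_mm512_set1_pd(1.0), at);     // 1 - |...|
    return _mm512_max_pd(val, _mm512_setzero_pd());           // clamp to >= 0
}

int main() {
    // Eight sample points, 64-byte aligned for the aligned load below.
    alignas(64) double xs[8] = {0.10, 0.25, 0.40, 0.55, 0.60, 0.75, 0.90, 0.95};
    __m512d x = _mm512_load_pd(xs);

    // One basis function with level l = 2 (so 2^l = 4) and index i = 2.
    __m512d phi = hat_basis_8(x, 4.0, 2.0);

    alignas(64) double out[8];
    _mm512_store_pd(out, phi);
    for (int k = 0; k < 8; ++k)
        std::printf("phi(x=%.2f) = %.3f\n", xs[k], out[k]);
    return 0;
}
```

In a full classification kernel, such a per-dimension evaluation would typically be multiplied across dimensions (tensor-product basis) and accumulated over grid points, e.g. with `_mm512_fmadd_pd`; that surrounding structure is omitted here to keep the sketch self-contained.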