Pattern-based Autotuning of OpenMP Loops using Graph Neural Networks

Akashnil Dutta, J. Alcaraz, Ali TehraniJamsaz, A. Sikora, Eduardo César, A. Jannesari
DOI: 10.1109/AI4S56813.2022.00010
Published in: 2022 IEEE/ACM International Workshop on Artificial Intelligence and Machine Learning for Scientific Applications (AI4S), November 2022
Citations: 4

Abstract

The stagnation of Moore's law has led to the increased adoption of parallel programming for enhancing the performance of scientific applications. Frequently occurring code and design patterns in scientific applications are often used when transforming serial code to parallel code, but identifying these patterns is not easy. To this end, we propose using Graph Neural Networks to model code flow graphs and identify patterns in such parallel code. Additionally, identifying the runtime parameters that yield the best-performing parallel code is also challenging. We propose a pattern-guided, deep-learning-based tuning approach to help identify the best runtime parameters for OpenMP loops. Overall, we aim to identify commonly occurring patterns in parallel loops and use these patterns to guide auto-tuning efforts. We validate our hypothesis on 20 different applications from the Polybench and STREAM benchmark suites. This deep-learning-based approach can identify the considered patterns with an overall accuracy of 91%. We validate the usefulness of patterns for auto-tuning by tuning the number of threads, scheduling policy, and chunk size on a single-socket system, and the thread count and affinity on a multi-socket machine. Our approach achieves geometric mean speedups of $1.1\times$ and $4.7\times$, respectively, over default OpenMP configurations, compared to brute-force speedups of $1.27\times$ and $4.93\times$, respectively.
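To make the tuning problem concrete, the sketch below enumerates the kind of search space the abstract describes (thread count × scheduling policy × chunk size) and shows how a pattern prediction could prune it before any timing runs. The parameter values, pattern names, and pattern-to-configuration priors are purely illustrative assumptions, not taken from the paper; the paper's actual approach uses a GNN over code flow graphs to produce the pattern label.

```python
import itertools

# Illustrative OpenMP runtime-parameter search space (hypothetical values):
# the three knobs tuned on the single-socket system in the abstract.
THREADS = [1, 2, 4, 8, 16]
SCHEDULES = ["static", "dynamic", "guided"]
CHUNKS = [1, 8, 64, 256]

def brute_force_space():
    """Every (threads, schedule, chunk) configuration a brute-force tuner must time."""
    return list(itertools.product(THREADS, SCHEDULES, CHUNKS))

# Hypothetical priors mapping a predicted loop pattern to promising schedule/chunk
# choices (placeholder values for illustration only). In the paper, the pattern
# label would come from the GNN classifier rather than being supplied by hand.
PATTERN_PRIORS = {
    "stencil":   {"schedule": ["static"],            "chunk": [64, 256]},
    "reduction": {"schedule": ["static", "guided"],  "chunk": [1, 8]},
}

def pattern_guided_space(pattern):
    """Subset of the search space worth timing for a loop of the given pattern."""
    prior = PATTERN_PRIORS[pattern]
    return list(itertools.product(THREADS, prior["schedule"], prior["chunk"]))

full = brute_force_space()
pruned = pattern_guided_space("stencil")
print(len(full), len(pruned))  # the guided search times far fewer configurations
```

This illustrates why pattern guidance can approach brute-force speedups (e.g. $1.1\times$ vs. $1.27\times$ in the abstract) at a fraction of the search cost: the pattern label removes most of the schedule/chunk combinations up front, leaving only the thread count to sweep broadly.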