An Overhead Analysis of MPI Profiling and Tracing Tools

S. Hunold, Jordy I. Ajanohoun, Ioannis Vardas, J. Träff
{"title":"MPI分析和跟踪工具的开销分析","authors":"S. Hunold, Jordy I. Ajanohoun, Ioannis Vardas, J. Träff","doi":"10.1145/3526063.3535353","DOIUrl":null,"url":null,"abstract":"MPI performance analysis tools are important instruments for finding performance bottlenecks in large-scale MPI applications. These tools commonly support either the profiling or the tracing of parallel applications. Depending on the type of analysis, the use of such a performance analysis tool may entail a significant runtime overhead on the monitored parallel application. However, overheads can occur in different stages of the performance analysis with varying severity, e.g., the overhead when initializing an MPI context is typically less problematic than when monitoring a high number of short-lived MPI function calls. In this work, we precisely define the different types of overheads that performance engineers may encounter when applying performance analysis tools. In the context of performance tuning, it is crucial to avoid delaying individual events (e.g., function calls) when monitoring MPI applications, as otherwise performance bottlenecks may not show up in the same spot as when running the applications without applying a performance analysis tool. We empirically examine the different types of overheads associated with popular performance analysis tools for a set of well-known proxy applications and categorize the tools according to our findings. Our study shows that although the investigated MPI profiling and tracing tools exhibit a rather unique overhead footprint, they hardly influence the net time of an MPI application, which is the time between the Init and Finalize calls. Performance engineers should be aware of all types of overheads associated with each tool to avoid very costly batch jobs.","PeriodicalId":244248,"journal":{"name":"Proceedings of the 2nd Workshop on Performance EngineeRing, Modelling, Analysis, and VisualizatiOn Strategy","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Overhead Analysis of MPI Profiling and Tracing Tools\",\"authors\":\"S. Hunold, Jordy I. Ajanohoun, Ioannis Vardas, J. Träff\",\"doi\":\"10.1145/3526063.3535353\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"MPI performance analysis tools are important instruments for finding performance bottlenecks in large-scale MPI applications. These tools commonly support either the profiling or the tracing of parallel applications. Depending on the type of analysis, the use of such a performance analysis tool may entail a significant runtime overhead on the monitored parallel application. However, overheads can occur in different stages of the performance analysis with varying severity, e.g., the overhead when initializing an MPI context is typically less problematic than when monitoring a high number of short-lived MPI function calls. In this work, we precisely define the different types of overheads that performance engineers may encounter when applying performance analysis tools. In the context of performance tuning, it is crucial to avoid delaying individual events (e.g., function calls) when monitoring MPI applications, as otherwise performance bottlenecks may not show up in the same spot as when running the applications without applying a performance analysis tool. 
We empirically examine the different types of overheads associated with popular performance analysis tools for a set of well-known proxy applications and categorize the tools according to our findings. Our study shows that although the investigated MPI profiling and tracing tools exhibit a rather unique overhead footprint, they hardly influence the net time of an MPI application, which is the time between the Init and Finalize calls. Performance engineers should be aware of all types of overheads associated with each tool to avoid very costly batch jobs.\",\"PeriodicalId\":244248,\"journal\":{\"name\":\"Proceedings of the 2nd Workshop on Performance EngineeRing, Modelling, Analysis, and VisualizatiOn Strategy\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2nd Workshop on Performance EngineeRing, Modelling, Analysis, and VisualizatiOn Strategy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3526063.3535353\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd Workshop on Performance EngineeRing, Modelling, Analysis, and VisualizatiOn Strategy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526063.3535353","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

MPI performance analysis tools are important instruments for finding performance bottlenecks in large-scale MPI applications. These tools commonly support either the profiling or the tracing of parallel applications. Depending on the type of analysis, the use of such a performance analysis tool may entail a significant runtime overhead on the monitored parallel application. However, overheads can occur in different stages of the performance analysis with varying severity, e.g., the overhead when initializing an MPI context is typically less problematic than when monitoring a high number of short-lived MPI function calls. In this work, we precisely define the different types of overheads that performance engineers may encounter when applying performance analysis tools. In the context of performance tuning, it is crucial to avoid delaying individual events (e.g., function calls) when monitoring MPI applications, as otherwise performance bottlenecks may not show up in the same spot as when running the applications without applying a performance analysis tool. We empirically examine the different types of overheads associated with popular performance analysis tools for a set of well-known proxy applications and categorize the tools according to our findings. Our study shows that although the investigated MPI profiling and tracing tools exhibit a rather unique overhead footprint, they hardly influence the net time of an MPI application, which is the time between the Init and Finalize calls. Performance engineers should be aware of all types of overheads associated with each tool to avoid very costly batch jobs.
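For context, profiling and tracing tools typically hook MPI calls through the standard PMPI interface: each MPI_X entry point is overridden by a wrapper that forwards to the corresponding PMPI_X symbol. The sketch below is a minimal, hypothetical illustration of that mechanism, not the tooling evaluated in the paper; the choice of wrapped calls, the counters, and the preload invocation are assumptions for illustration. It also shows the paper's notion of net time, taken between the Init and Finalize calls.

```c
/*
 * Minimal PMPI interposition sketch (illustrative only): the standard
 * mechanism MPI profiling/tracing tools build on. Compile into a shared
 * library and preload it, e.g.:
 *   mpicc -shared -fPIC pmpi_sketch.c -o libpmpi_sketch.so
 *   LD_PRELOAD=./libpmpi_sketch.so mpirun -np 4 ./app
 */
#include <mpi.h>
#include <stdio.h>

static double net_start;   /* wall-clock time when MPI_Init returns */
static long   send_calls;  /* number of intercepted MPI_Send calls */
static double send_time;   /* accumulated time spent inside MPI_Send */

int MPI_Init(int *argc, char ***argv)
{
    int rc = PMPI_Init(argc, argv); /* forward to the real MPI_Init */
    net_start = PMPI_Wtime();       /* net time starts after Init returns */
    return rc;
}

int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = PMPI_Wtime();
    int rc = PMPI_Send(buf, count, type, dest, tag, comm);
    send_time += PMPI_Wtime() - t0; /* the timing itself perturbs short calls */
    send_calls++;
    return rc;
}

int MPI_Finalize(void)
{
    /* "net time" as defined in the paper: time between Init and Finalize */
    double net = PMPI_Wtime() - net_start;
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("net time: %.3f s, MPI_Send: %ld calls, %.3f s\n",
               net, send_calls, send_time);
    return PMPI_Finalize();
}
```

Because every wrapper adds a small fixed cost per call, applications that issue many short-lived MPI calls see the largest per-event perturbation, whereas one-off costs at Init or Finalize inflate only the total job time.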