{"title":"Performance study of CFD Pressure-based solver on HPC","authors":"Hanan A. Hassan, O. O. Rabhy, Shimaa A. Mohamed","doi":"10.1109/ISSPIT51521.2020.9408945","DOIUrl":null,"url":null,"abstract":"High-Performance Computing (HPC) system is required for the customers’ needs of high CPU computations. This paper aims to study the communication performance and scalability of ANSYS Fluent applications on HPC system infrastructure. A benchmark of external flow over the aircraft wing is used. The scalability of the ANSYS Fluent application on the HPC system is assessed in terms of core speedup, core rating, and core solver efficiency. The study is conducted on a homogeneous cluster of 5 nodes. Our results reveal that the overall performance of the Intel MPI library is better than OpenMPI and pcmpi libraries for the same experimental design.","PeriodicalId":111385,"journal":{"name":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSPIT51521.2020.9408945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
High-Performance Computing (HPC) systems are required to meet customers' demand for compute-intensive workloads. This paper studies the communication performance and scalability of ANSYS Fluent applications on an HPC infrastructure, using a benchmark of external flow over an aircraft wing. The scalability of the ANSYS Fluent application on the HPC system is assessed in terms of core speedup, core rating, and core solver efficiency. The study is conducted on a homogeneous cluster of 5 nodes. Our results reveal that, for the same experimental design, the Intel MPI library outperforms the OpenMPI and pcmpi libraries overall.
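The three metrics named in the abstract can be made concrete with a short sketch. The definitions below are the conventional ones (speedup relative to a reference run, parallel efficiency as achieved speedup over ideal speedup, and a Fluent-style benchmark "rating" of solver runs per day, i.e. 86400 divided by wall-clock seconds); the timing numbers are hypothetical placeholders, not results from the paper.

```python
# Minimal sketch of the scalability metrics, assuming standard definitions.
# The wall-clock times below are illustrative placeholders only.

def speedup(t_ref: float, t_n: float) -> float:
    """Speedup of an n-core run relative to the reference run."""
    return t_ref / t_n

def efficiency(t_ref: float, n_ref: int, t_n: float, n: int) -> float:
    """Parallel efficiency: achieved speedup divided by the ideal speedup n/n_ref."""
    return speedup(t_ref, t_n) * n_ref / n

def rating(t_n: float) -> float:
    """Fluent-style core rating: how many such solver runs fit in 24 hours."""
    return 86400.0 / t_n

if __name__ == "__main__":
    # Hypothetical wall-clock times (seconds) for increasing core counts.
    runs = {16: 1200.0, 32: 640.0, 64: 350.0, 128: 210.0}
    t_ref, n_ref = runs[16], 16
    for n, t in runs.items():
        print(f"{n:4d} cores: speedup {speedup(t_ref, t):5.2f}, "
              f"efficiency {efficiency(t_ref, n_ref, t, n):6.1%}, "
              f"rating {rating(t):7.1f} runs/day")
```

Under these definitions, an MPI library that reduces communication overhead shows up directly as higher speedup and rating at large core counts, which is the comparison the paper draws between Intel MPI, OpenMPI, and pcmpi.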