Multi-scale Modelling of Urban Air Pollution with Coupled Weather Forecast and Traffic Simulation on HPC Architecture

L. Kornyei, Z. Horváth, A. Ruopp, Á. Kovács, Bence Liszkai
{"title":"基于HPC架构的城市大气污染天气预报与交通模拟耦合多尺度模型","authors":"L. Kornyei, Z. Horváth, A. Ruopp, Á. Kovács, Bence Liszkai","doi":"10.1145/3440722.3440917","DOIUrl":null,"url":null,"abstract":"Urban air pollution is one of the global challenges to which over 3 million deaths are attributable yearly. Traffic is emitting over 40% of several contaminants, like NO2 [10]. The directive 2008/50/EC of the European Commission prescribes the assessment air quality by accumulating exceedance of contamination concentration limits over a one-year period using measurement stations, which may be supplemented by modeling techniques to provide adequate information on spatial distribution. Computational models do predict that small scale spatial fluctuation is expected on the street level: local air flow phenomena can cluster up pollutants or carry them away far from the location of emission [2]. The spread of the SARS-CoV-2 virus also interacts with urban air quality. Regions in lock down have highly reduced air pollution strain due to the drop of traffic [4]. Also, correlation between the fatality rate of a previous respiratory disease, SARS 2002, and Air Pollution Index suggests that bad air quality may double fatality rate [6]. At street level pollution dispersion highly depends on the daily weather, a one-year simulation low time scale model is needed. Additionally, to resolve street-level phenomena a cell size of 1 to 4 meters are utilized in these regions that requires CFD methods to use a simulation domain of 1 to 100 million cells. Memory and computational requirements for these tasks are enormous, so HPC architecture is needed to have reasonable results within a manageable time frame. To tackle this challenge, the Urban Air Pollution (UAP) workflow is developed as a pilot of the HiDALGO project [7], which is funded by the H2020 framework of the European Union. The pilot is designed in a modular way with the mindset to be developed into a digital twin model later. Its standardized interfaces enable multiple software to be used in a specific module. At its core, a traffic simulation implemented in SUMO is coupled with a CFD simulation. Currently OpenFOAM (v1906, v1912 and v2006) and Ansys Fluent (v19.2) are supported. This presentation focuses on the OpenFOAM implementation, as it proved more feasible and scalable on most HPC architectures. The incompressible unsteady Reynolds-averaged Navier– Stokes equations are solved with the PIMPLE method, Courant-number based adaptive time stepping and transient atmospheric boundary conditions. The single component NOx-type pollution is calculated independently as a scalar with transport equations along the flow field. Pollution emission is treated as a per cell volumetric source that changes in time. The initial condition is obtained from a steady state solution at the initial time with the SIMPLE method, using the identical, but stationary boundary conditions and source fields. Custom modules are developed for proper boundary condition and source term handling. The UAP workflow supports automatic 3D air flow geometry and traffic network generation from OpenStreetMap data. Ground and building information are used for geometry, road network for traffic, and further assets for visualization. The CFD 3D mesh generation is done by either an in-house octree-mesh generator, or the snappyHexMesh utility from OpenFOAM. Meteorological data for boundary conditions are acquired from ECMWF using the Polytope REST API [5] automatically for the user specified day and location. 
The UAP workflow supports the automatic generation of the 3D air-flow geometry and the traffic network from OpenStreetMap data. Ground and building information are used for the geometry, the road network for the traffic, and further assets for visualization. The 3D CFD mesh is generated either by an in-house octree-mesh generator or by the snappyHexMesh utility from OpenFOAM.
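To give a flavor of the octree idea, the sketch below refines a 2D quadtree (the 2D analogue of an octree) toward a single axis-aligned building footprint until the 1–4 m street-level resolution is reached. The footprint, sizes, and intersection test are simplified assumptions; the in-house generator works in 3D on OpenStreetMap-derived geometry.

```python
# Quadtree refinement toward a building footprint -- a 2D stand-in for
# the octree mesher. A cell is split while it touches the footprint and
# is still coarser than the street-level resolution. Illustrative only.
from dataclasses import dataclass

MIN_SIZE = 2.0                       # target cell edge near walls [m]
BUILDING = (40.0, 40.0, 60.0, 70.0)  # assumed footprint (x0, y0, x1, y1)

@dataclass
class Cell:
    x: float
    y: float
    size: float

def touches_building(c: Cell) -> bool:
    bx0, by0, bx1, by1 = BUILDING
    return not (c.x + c.size < bx0 or c.x > bx1 or
                c.y + c.size < by0 or c.y > by1)

def refine(c: Cell, leaves: list) -> None:
    """Split cells near the building until MIN_SIZE is reached."""
    if c.size <= MIN_SIZE or not touches_building(c):
        leaves.append(c)
        return
    half = c.size / 2.0
    for ox in (0.0, half):
        for oy in (0.0, half):
            refine(Cell(c.x + ox, c.y + oy, half), leaves)

leaves: list = []
refine(Cell(0.0, 0.0, 128.0), leaves)   # 128 m root cell over the block
fine = sum(1 for c in leaves if c.size <= MIN_SIZE)
print(f"{len(leaves)} leaf cells, {fine} at street-level resolution")
```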
Meteorological data for the boundary conditions are acquired automatically from ECMWF for the user-specified day and location using the Polytope REST API [5]. The values at the closest grid point are selected, transformed into the Euclidean coordinate system, and converted to an OpenFOAM-readable file format. A custom OpenFOAM module ensures the proper handling of the altitude- and time-dependent boundary field. Background air quality data are acquired from the Copernicus AMS [3]. Results are validated against a local air-quality sensor network, which is being expanded for more accuracy.
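The grid-point selection can be pictured as follows: given gridded fields on a latitude/longitude mesh, pick the point nearest the site and write the profile out as a plain time/height table for a boundary-condition reader. The array layout, level heights, and output format below are assumptions for illustration, not the Polytope response schema or the actual OpenFOAM input format.

```python
# Pick the forecast grid point nearest to the simulation site and dump
# the wind profile as a plain time/height table. The array layout
# (time, level, lat, lon) and the file format are assumptions -- the
# real ECMWF/Polytope response and the OpenFOAM reader differ.
import numpy as np

lats = np.linspace(47.0, 48.5, 16)               # sample grid around Gyor
lons = np.linspace(17.0, 18.5, 16)
rng = np.random.default_rng(0)
u_wind = rng.normal(3.0, 1.0, (24, 5, 16, 16))   # fake (t, level, lat, lon)

site_lat, site_lon = 47.6875, 17.6504            # approx. Gyor city centre
i = int(np.abs(lats - site_lat).argmin())        # nearest-neighbour pick
j = int(np.abs(lons - site_lon).argmin())
profile = u_wind[:, :, i, j]                     # (24 hours, 5 levels)

heights = [10.0, 50.0, 100.0, 200.0, 500.0]      # assumed level heights [m]
with open("u_boundary.dat", "w") as f:
    f.write("# hour  " + "  ".join(f"z={z:g}m" for z in heights) + "\n")
    for hour, row in enumerate(profile):
        f.write(f"{hour:5d}  " + "  ".join(f"{v:7.3f}" for v in row) + "\n")
print(f"nearest grid point: ({lats[i]:.4f}, {lons[j]:.4f})")
```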
Traffic simulations or traffic data can also be obtained from external sources. For the pilot, a traffic sensor network of cameras and loop detectors has been installed in the Hungarian city of Győr. Sensor data is transmitted in real time and is to be coupled directly into the simulation. Random traffic generation is also supported. Emissions are computed by an in-house tool from traffic simulation results in the SUMO data file format by applying the COPERT model, then interpolated to the CFD mesh and stored in an OpenFOAM-readable file format. A custom OpenFOAM module is responsible for the timely reading of this source-term data and the proper adjustment of the equations.
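A stripped-down version of the emission step might look like the following sketch, which converts per-road vehicle counts into NOx mass flows with a single emission factor and deposits them into the ground-layer cells crossed by each road. The factor, road segments, and grid are invented for illustration; the real tool applies the full COPERT methodology to SUMO outputs and interpolates onto the unstructured CFD mesh.

```python
# Turn per-road traffic counts into a per-cell volumetric NOx source on
# a coarse grid -- a toy version of the COPERT-based emission tool.
# The emission factor and road list are invented for illustration.
import numpy as np

EF_NOX = 0.4e-3                   # assumed fleet NOx factor [kg per veh-km]
CELL = 50.0                       # horizontal cell edge [m]
CELL_VOLUME = CELL * CELL * 4.0   # 4 m tall ground layer [m^3]
grid = np.zeros((20, 20))         # 1 km x 1 km source field [kg m^-3 s^-1]

# (x0, y0, x1, y1, vehicles per hour) -- hypothetical road segments
roads = [(100.0, 500.0, 900.0, 500.0, 1200.0),
         (500.0, 100.0, 500.0, 900.0, 800.0)]

for x0, y0, x1, y1, veh_per_h in roads:
    length_km = np.hypot(x1 - x0, y1 - y0) / 1000.0
    rate = veh_per_h / 3600.0 * EF_NOX * length_km   # segment total [kg/s]
    # sample the segment and deposit the rate evenly along it
    n = 100
    for s in np.linspace(0.0, 1.0, n):
        ix = int((x0 + s * (x1 - x0)) / CELL)
        iy = int((y0 + s * (y1 - y0)) / CELL)
        grid[iy, ix] += rate / n / CELL_VOLUME

print(f"total emission: {grid.sum() * CELL_VOLUME * 3600:.3f} kg/h")
print(f"peak cell source: {grid.max():.3e} kg m^-3 s^-1")
```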
Calculating the wind flow and the pollution dispersion is the most computationally heavy part of the workflow: input file generation and traffic simulation top out at 60 and 20 minutes on one node, respectively, for one simulated day, compared with a minimum of 2 hours for the CFD model with the smallest cell count. For benchmarking purposes, only the runtime of a small portion (15–60 minutes) of the scalable part of the OpenFOAM simulation is measured for the transient case, and 600 iterations for the steady-state case. Primary benchmarks are run on the local cluster PLEXI (18 nodes, 2x6-core Intel X5650, 48 GB RAM, 40 Gb InfiniBand), with additional investigations on EAGLE (PSNC, Poznan; 1119 nodes, 2x14-core Intel E5-2697v3, 64 GB RAM, 56 Gb InfiniBand) and the HAWK Test System (HLRS, Stuttgart; 5632 nodes, 2x64-core AMD EPYC 7742 for 128 cores per node, 256 GB RAM, 200 Gb InfiniBand HDR200). Tuning the OpenFOAM settings with optimized I/O, multilevel decomposition, and cell-index renumbering improved the speedup on PLEXI at 216 cores from 18 to 102 for the 1M-cell model and from 49 to 77 for the 9M-cell model. On the HAWK Test System, speedups top out at 133 for 1M cells and 401 for 9M cells, both at 2048 cores. On EAGLE, the 1M-cell model tops out at a speedup of 104 at 448 cores. The saturation effect at single-node core counts suggests that the calculations are memory-bandwidth limited. Full-day simulation runs were also performed for areas within five cities (Győr, Madrid, Stuttgart, Herrenberg and Graz) with random traffic and mesh sizes of ca. 0.8M and ca. 3M cells. The runtime of the full CFD module on PLEXI at 48 cores is 2.7 and 20 hours on average for the smaller and larger cell counts, respectively. This puts the one-year simulation within reach for coarse meshes on PLEXI and for finer meshes on the more powerful HPC architectures. Due to the high core-count-to-memory-channel ratio of AMD processors, poor single-node parallel efficiency is expected for memory-bandwidth-limited applications; this supports our present findings and is comparable with the speedup results of other CFD software on the same hardware [9]. Node-based speedups of certain OpenFOAM simulations, however, may show superlinear behavior [1]. In conclusion, the UAP workflow and the OpenFOAM implementation of the CFD module are on a good track toward the goal of simulating one year within a manageable time frame. The few-hours simulation time for one day's pollution also makes the current version feasible for forecasting.
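These figures can be sanity-checked with the usual definitions, speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. The snippet below computes the implied efficiencies and naively extrapolates the one-year wall time from the measured per-day runtimes (straight multiplication by 365, ignoring queueing and I/O, which is our simplifying assumption).

```python
# Parallel efficiency implied by the reported speedups, and a naive
# one-year wall-time extrapolation from the measured per-day runtimes.
# E(p) = S(p)/p; the 365x scaling ignores queueing and I/O overheads.
cases = {                      # (speedup, cores) from the benchmarks
    "PLEXI  1M cells": (102, 216),
    "PLEXI  9M cells": (77, 216),
    "HAWK   1M cells": (133, 2048),
    "HAWK   9M cells": (401, 2048),
    "EAGLE  1M cells": (104, 448),
}
for name, (s, p) in cases.items():
    print(f"{name}: speedup {s:4d} on {p:4d} cores -> efficiency {s / p:5.1%}")

for label, hours_per_day in (("0.8M cells", 2.7), ("3M cells", 20.0)):
    print(f"one year, {label} on PLEXI@48 cores: "
          f"{hours_per_day * 365 / 24:.0f} days of wall time")
```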
Future work includes the use of proper orthogonal decomposition (POD) [8], a model-order-reduction method, to eventually improve the calculation time drastically while sacrificing limited accuracy. We also plan to test and benchmark GPGPU-based solvers, to implement chemical reactions for the pollutants, and to extend the validation using new air-quality measuring stations.
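POD itself is a standard technique: stack saved flow snapshots as columns of a matrix, take a singular value decomposition, and keep the few leading modes that capture most of the energy. The sketch below demonstrates this generic procedure on synthetic snapshots; it is not the authors' specific reduced-order model.

```python
# Proper orthogonal decomposition of a snapshot matrix via SVD: keep the
# leading modes that capture most of the variance, giving a low-order
# basis for fast approximate re-simulation. Snapshots here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap = 5000, 120                 # cells x stored steps (toy sizes)
t = np.linspace(0.0, 1.0, n_snap)
x = np.linspace(0.0, 1.0, n_dof)[:, None]
# two coherent structures plus noise, standing in for saved flow fields
snapshots = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.3 * np.sin(6 * np.pi * x) * np.sin(4 * np.pi * t)
             + 0.01 * rng.standard_normal((n_dof, n_snap)))

U, sv, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% of the energy
basis = U[:, :r]                             # reduced basis Phi

# a new field is approximated by projection: f ~ Phi (Phi^T f)
f = snapshots[:, 60]
err = np.linalg.norm(f - basis @ (basis.T @ f)) / np.linalg.norm(f)
print(f"{r} modes retain 99% energy; projection error {err:.2%}")
```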
