An Interactive Data-Driven HPC System for Forecasting Weather, Wildland Fire, and Smoke
Pub Date: 2019-11-01 · DOI: 10.1109/UrgentHPC49580.2019.00010 · Pages: 35-44
J. Mandel, M. Vejmelka, A. Kochanski, A. Farguell, J. Haley, D. Mallia, K. Hilburn
We present an interactive HPC framework for coupled fire and weather simulations. The system is suitable for urgent simulations and forecasts of wildfire propagation and smoke, and does not require expert knowledge to set up and run. The core of the system is a coupled weather, wildland fire, fuel moisture, and smoke model running within an interactive workflow and data management system. The system automates job setup, data acquisition, preprocessing, and simulation on an HPC cluster. It provides animated visualization of the results on a dedicated cloud mapping portal, as well as delivery as GIS files and Google Earth KML files. The system also serves as an extensible framework for further research, including data assimilation and applications of machine learning to initialize simulations from satellite data.
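One of the delivery formats named in the abstract is Google Earth KML. As a small illustration of what such a delivery artifact can look like, the sketch below writes a time-stamped fire-perimeter polygon as KML; the coordinates, timestamp, and file name are made up, and the real system generates these files from coupled model output rather than hand-written rings.

```python
# Hypothetical example of the kind of KML artifact such a system delivers:
# a time-stamped fire-perimeter polygon viewable in Google Earth.
perimeter = [(-112.05, 40.60), (-112.03, 40.61), (-112.02, 40.59),
             (-112.05, 40.60)]  # made-up lon/lat ring, first point repeated to close it

coords = " ".join(f"{lon},{lat},0" for lon, lat in perimeter)
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Fire perimeter</name>
    <TimeStamp><when>2019-11-01T12:00:00Z</when></TimeStamp>
    <Polygon><outerBoundaryIs><LinearRing>
      <coordinates>{coords}</coordinates>
    </LinearRing></outerBoundaryIs></Polygon>
  </Placemark>
</kml>"""

with open("fire_perimeter.kml", "w") as f:
    f.write(kml)
```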
{"title":"An Interactive Data-Driven HPC System for Forecasting Weather, Wildland Fire, and Smoke","authors":"J. Mandel, M. Vejmelka, A. Kochanski, A. Farguell, J. Haley, D. Mallia, K. Hilburn","doi":"10.1109/UrgentHPC49580.2019.00010","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00010","url":null,"abstract":"We present an interactive HPC framework for coupled fire and weather simulations. The system is suitable for urgent simulations and forecast of wildfire propagation and smoke. It does not require expert knowledge to set up and run the forecasts. The core of the system is a coupled weather, wildland fire, fuel moisture, and smoke model, running in an interactive workflow and data management system. The system automates job setup, data acquisition, preprocessing, and simulation on an HPC cluster. It provides animated visualization of the results on a dedicated mapping portal in the cloud as well as delivery as GIS files and Google Earth KML files. The system also serves as an extensible framework for further research, including data assimilation and applications of machine learning to initialize the simulations from satellite data.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"18 1","pages":"35-44"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87861303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-Demand Urgent High Performance Computing Utilizing the Google Cloud Platform
Pub Date: 2019-11-01 · DOI: 10.1109/UrgentHPC49580.2019.00008 · Pages: 13-23
Brandon Posey, Ada E. Deer, Wyatt Gorman, Vanessa July, Neeraj K. Kanhere, D. Speck, Boyd Wilson, A. Apon
In this paper we describe how high performance computing on the Google Cloud Platform can be used in urgent and emergency situations to process large amounts of traffic data efficiently and on demand. Our approach provides a solution to an urgent disaster-management need through massive data processing and high performance computing. The traffic data used in this demonstration were collected from public camera systems on Interstate highways in the southeastern United States. Our solution launches, on the Google Cloud Platform, a parallel processing system the size of a Top 5 supercomputer. Results show that the parallel processing system can be launched in a few hours, that it is effective at fast processing of high-volume data, and that it can be de-provisioned in a few hours. We processed 211 TB of video using 6,227,593 core hours over a span of about eight hours at an average cost of around $0.008 per vCPU hour, which is less than the cost of many on-premises HPC systems.
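The abstract's figures support a quick back-of-the-envelope check. The snippet below simply combines the reported numbers; the per-hour rate is an average, so the derived totals are approximate rather than billed amounts.

```python
# Back-of-the-envelope check of the figures reported in the abstract.
core_hours = 6_227_593   # total vCPU hours consumed
rate_usd = 0.008         # average cost per vCPU hour
wall_hours = 8           # approximate span of the run
data_tb = 211            # video processed, in TB

total_cost = core_hours * rate_usd       # ~ $49,821
avg_vcpus = core_hours / wall_hours      # ~ 778,449 concurrent cores on average
throughput = data_tb / wall_hours        # ~ 26.4 TB of video per hour

print(f"total cost    ~ ${total_cost:,.0f}")
print(f"average vCPUs ~ {avg_vcpus:,.0f}")
print(f"throughput    ~ {throughput:.1f} TB/hour")
```

The average-vCPU figure of roughly 780,000 cores is consistent with the abstract's claim of a system "the size of a Top 5 supercomputer."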
{"title":"On-Demand Urgent High Performance Computing Utilizing the Google Cloud Platform","authors":"Brandon Posey, Ada E. Deer, Wyatt Gorman, Vanessa July, Neeraj K. Kanhere, D. Speck, Boyd Wilson, A. Apon","doi":"10.1109/UrgentHPC49580.2019.00008","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00008","url":null,"abstract":"In this paper we describe how high performance computing in the Google Cloud Platform can be utilized in an urgent and emergency situation to process large amounts of traffic data efficiently and on demand. Our approach provides a solution to an urgent need for disaster management using massive data processing and high performance computing. The traffic data used in this demonstration is collected from the public camera systems on Interstate highways in the Southeast United States. Our solution launches a parallel processing system that is the size of a Top 5 supercomputer using the Google Cloud Platform. Results show that the parallel processing system can be launched in a few hours, that it is effective at fast processing of high volume data, and can be de-provisioned in a few hours. We processed 211TB of video utilizing 6,227,593 core hours over the span of about eight hours with an average cost of around $0.008 per vCPU hour, which is less than the cost of many on-premise HPC systems.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"43 1","pages":"13-23"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79919513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Technologies Required for Fusing HPC and Real-Time Data to Support Urgent Computing
Pub Date: 2019-11-01 · DOI: 10.1109/UrgentHPC49580.2019.00009 · Pages: 24-34
G. Gibb, R. Nash, Nick Brown, Bianca Prodan
The use of High Performance Computing (HPC) to complement urgent decision making in the event of disasters is an important future use of supercomputers. However, the usage modes involved are rather different from how HPC has traditionally been used. As such, many obstacles must be overcome, not least the unbounded wait times in batch system queues, before the use of HPC in disaster response is practical. In this paper, we present how the VESTEC project plans to overcome these issues and develop a working prototype of an urgent computing control system. We describe the requirements for such a system and analyse the different technologies that can be leveraged to build it. Finally, we explore the design of the VESTEC system and discuss ongoing challenges that must be addressed to realise a production-level system.
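The central obstacle the abstract names is unbounded batch-queue wait time. As a toy illustration of the general idea of deadline-aware dispatch across several systems (this is not VESTEC's actual design; the machine names, wait estimates, and runtimes are all hypothetical):

```python
# Toy sketch: submit an urgent job to whichever system's estimated queue
# wait plus runtime still meets the decision deadline.
machines = {
    "hpc_a": {"est_queue_wait_min": 240, "runtime_min": 30},
    "hpc_b": {"est_queue_wait_min": 20,  "runtime_min": 45},
    "cloud": {"est_queue_wait_min": 2,   "runtime_min": 90},
}

deadline_min = 120  # a decision is needed within two hours

# Keep only systems whose expected completion time meets the deadline.
feasible = {name: m for name, m in machines.items()
            if m["est_queue_wait_min"] + m["runtime_min"] <= deadline_min}

if feasible:
    # Among feasible systems, pick the earliest expected completion.
    best = min(feasible, key=lambda n: feasible[n]["est_queue_wait_min"]
                                       + feasible[n]["runtime_min"])
    print(f"submit urgent job to {best}")
else:
    print("no system meets the deadline; escalate or preempt")
```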
{"title":"The Technologies Required for Fusing HPC and Real-Time Data to Support Urgent Computing","authors":"G. Gibb, R. Nash, Nick Brown, Bianca Prodan","doi":"10.1109/UrgentHPC49580.2019.00009","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00009","url":null,"abstract":"The use of High Performance Computing (HPC) to compliment urgent decision making in the event of disasters is an important future potential use of supercomputers. However, the usage modes involved are rather different from how HPC has been used traditionally. As such, there are many obstacles that need to be overcome, not least the unbounded wait times in the batch system queues, to make the use of HPC in disaster response practical. In this paper, we present how the VESTEC project plans to overcome these issues and develop a working prototype of an urgent computing control system. We describe the requirements for such a system and analyse the different technologies available that can be leveraged to successfully build such a system. We finally explore the design of the VESTEC system and discuss ongoing challenges that need to be addressed to realise a production level system.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"12 1","pages":"24-34"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91238392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Urgent Tsunami Computing
Pub Date: 2019-11-01 · DOI: 10.1109/UrgentHPC49580.2019.00011 · Pages: 45-50
F. Løvholt, S. Lorito, Jorge Macías Sánchez, M. Volpe, J. Selva, S. Gibbons
Tsunamis pose a hazard that may strike a coastal population within a short amount of time. To effectively forecast and warn of tsunamis, extremely fast simulations are needed. Until recently, however, such urgent tsunami simulations were infeasible in the context of early warning, and even for high-resolution rapid post-event assessment. The implementation of efficient tsunami numerical codes on Graphics Processing Units (GPUs) now allows much faster simulations, opening a new avenue for carrying out simulations Faster Than Real Time (FTRT). This paper discusses the need for urgent computing in computational tsunami science and presents workflows for two applications: FTRT itself and Probabilistic Tsunami Forecasting (PTF). PTF relies on a very large number of FTRT simulations to address forecasting uncertainty, whose full quantification will come increasingly within reach with the advent of exascale computing resources.
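PTF aggregates many FTRT scenario runs into a probabilistic forecast. A minimal sketch of that aggregation step is given below, with randomly generated stand-ins for the scenario weights and simulated maximum wave heights; the paper's workflow produces both from real source characterizations and FTRT runs.

```python
import numpy as np

# Hypothetical ensemble: each FTRT run i has a scenario weight w[i] and a
# simulated maximum wave height h[i] at some coastal point (metres).
rng = np.random.default_rng(0)
n = 1000                                   # number of FTRT scenarios
weights = rng.dirichlet(np.ones(n))        # scenario probabilities, sum to 1
heights = rng.lognormal(mean=0.0, sigma=0.8, size=n)  # stand-in wave heights

# Probability of exceeding each warning threshold: the summed weight of all
# scenarios whose simulated height exceeds it.
for threshold in (0.5, 1.0, 3.0):
    p = weights[heights > threshold].sum()
    print(f"P(height > {threshold} m) = {p:.3f}")
```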
{"title":"Urgent Tsunami Computing","authors":"F. Løvholt, S. Lorito, Jorge Macías Sánchez, M. Volpe, J. Selva, S. Gibbons","doi":"10.1109/UrgentHPC49580.2019.00011","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00011","url":null,"abstract":"Tsunamis pose a hazard that may strike a coastal population within a short amount of time. To effectively forecast and warn for tsunamis, extremely fast simulations are needed. However, until recently such urgent tsunami simulations have been infeasible in the context of early warning and even for high-resolution rapid post-event assessment. The implementation of efficient tsunami numerical codes using Graphical Processing Units (GPUs) has now allowed much faster simulations, which have opened a new avenue for carrying out simulations Faster Than Real Time (FTRT). This paper discusses the need for urgent computing in computational tsunami science, and presents workflows for two applications, namely FTRT itself and Probabilistic Tsunami Forecasting (PTF). PTF relies on a very high number of FTRT simulations addressing forecasting uncertainty, whose full quantification will be made more and more at reach with the advent of exascale computing resources.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"19 1","pages":"45-50"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77969546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying Uncertainty in Source Term Estimation with Tensorflow Probability
Pub Date: 2019-11-01 · DOI: 10.1109/UrgentHPC49580.2019.00006 · Pages: 1-6
A. Fanfarillo
Fast and accurate location and quantification of a dangerous chemical, biological, or radiological release plays a significant role in evaluating emergency situations and their consequences. Thanks to the advent of deep learning frameworks (e.g. TensorFlow) and new specialized hardware (e.g. Tensor Cores), the excellent fitting ability of Artificial Neural Networks (ANNs) has been used by several researchers to model atmospheric dispersion. Despite their high accuracy and fast prediction, regular ANNs do not provide any information about the uncertainty of a prediction. Such uncertainty can result from a combination of measurement noise and model architecture. In an urgent decision-making situation, the ability to provide fast predictions along with a quantification of their uncertainty is of paramount importance. In this work, a probabilistic deep learning model for source term estimation is presented, built with the TensorFlow Probability framework.
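The abstract does not show the model itself. As a minimal sketch of the pattern it describes, the snippet below uses TensorFlow Probability's DistributionLambda layer so the network outputs a distribution rather than a point estimate; the architecture, layer sizes, and variable names are illustrative, not the paper's.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# The final layer emits a Normal distribution, so each prediction of the
# source term (e.g. release rate) carries its own uncertainty estimate.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # two outputs: location and raw scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.nn.softplus(t[..., 1:]))),
])

# Training maximizes the likelihood of observed source terms, fitting both
# the mean and the spread of the predictive distribution.
model.compile(optimizer="adam", loss=lambda y, dist: -dist.log_prob(y))

# After fitting: dist = model(sensor_readings)  # hypothetical input batch
# dist.mean() is the prediction, dist.stddev() its uncertainty.
```

This captures aleatoric (data) uncertainty; weight-uncertainty layers such as DenseVariational extend the same pattern to model uncertainty.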
{"title":"Quantifying Uncertainty in Source Term Estimation with Tensorflow Probability","authors":"A. Fanfarillo","doi":"10.1109/UrgentHPC49580.2019.00006","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00006","url":null,"abstract":"Fast and accurate location and quantification of a dangerous chemical, biological or radiological release plays a significant role in evaluating emergency situations and their consequences. Thanks to the advent of Deep Learning frameworks (e.g. Tensorflow) and new specialized hardware (e.g. Tensor Cores), the excellent fitting ability of Artificial Neural Networks (ANN) has been used by several researchers to model atmospheric dispersion. Despite the high accuracy and fast prediction, regular ANNs do not provide any information about the uncertainty of the prediction. Such uncertainty can be the result of a combination of measurement noise and model architecture. In an urgent decision making situation, the ability to provide fast prediction along with a quantification of the uncertainty is of paramount importance. In this work, a Probabilistic Deep Learning model for source term estimation is presented, using the Tensorflow Probability framework.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"38 ","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91519501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Parameter Selection for Clustering Persistence Diagrams
Pub Date: 2019-10-17 · DOI: 10.1109/UrgentHPC49580.2019.00007 · Pages: 7-12
Max Kontak, Jules Vidal, Julien Tierny
In urgent decision-making applications, ensemble simulations are an important way to determine different outcome scenarios based on currently available data. In this paper, we analyze the output of ensemble simulations by considering so-called persistence diagrams, which are reduced representations of the original data motivated by the extraction of topological features. Based on a recently published progressive algorithm for the clustering of persistence diagrams, we determine the optimal number of clusters, and therefore the number of significantly different outcome scenarios, by minimizing established statistical score functions. Furthermore, we present a proof-of-concept prototype implementation of the statistical selection of the number of clusters and provide the results of an experimental study in which this implementation is applied to real-world ensemble data sets.
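The selection step the abstract describes (score each candidate number of clusters, keep the best) can be illustrated generically. The sketch below uses scikit-learn's silhouette score with k-means on synthetic feature vectors standing in for vectorized diagrams; the paper's method instead clusters persistence diagrams under a topological metric, which this toy example does not reproduce.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in data: pretend each row is a vectorized persistence diagram,
# drawn here from three well-separated synthetic groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 8))
               for c in (0.0, 3.0, 6.0)])

# Score each candidate k and keep the best: the same selection pattern,
# applied in the paper to clusterings of persistence diagrams.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(f"selected number of clusters: {best_k}")  # expected: 3
```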
{"title":"Statistical Parameter Selection for Clustering Persistence Diagrams","authors":"Max Kontak, Jules Vidal, Julien Tierny","doi":"10.1109/UrgentHPC49580.2019.00007","DOIUrl":"https://doi.org/10.1109/UrgentHPC49580.2019.00007","url":null,"abstract":"In urgent decision making applications, ensemble simulations are an important way to determine different outcome scenarios based on currently available data. In this paper, we will analyze the output of ensemble simulations by considering socalled persistence diagrams, which are reduced representations of the original data, motivated by the extraction of topological features. Based on a recently published progressive algorithm for the clustering of persistence diagrams, we determine the optimal number of clusters, and therefore the number of significantly different outcome scenarios, by the minimization of established statistical score functions. Furthermore, we present a proof-ofconcept prototype implementation of the statistical selection of the number of clusters and provide the results of an experimental study, where this implementation has been applied to real-world ensemble data sets.","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"9 1","pages":"7-12"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90368630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the Workshop Chair
Pub Date: 2019-06-01 · DOI: 10.1109/SERVICES-2.2008.38
Yanbo Han
Users and developers of scientific and engineering applications target a wide range of diverse computing platforms: from laptops and workstations, to clusters with homogeneous or heterogeneous node architectures, to the largest and most powerful supercomputers on the planet. This diversity presents the high performance computing (HPC) community with the challenge of developing software using techniques that enable portability without unduly compromising performance or significantly impacting productivity. Although there have been some partial successes in addressing this challenge, much work remains before the community can truly claim to have productive performance portability techniques (P3).
{"title":"Message from the Workshop Chair","authors":"Yanbo Han","doi":"10.1109/SERVICES-2.2008.38","DOIUrl":"https://doi.org/10.1109/SERVICES-2.2008.38","url":null,"abstract":"Users and developers of scientific and engineering applications target a wide range of diverse computing platforms: from laptops and workstations, to clusters with homogeneous or heterogeneous node architectures, to the largest and most powerful supercomputers on the planet. This diversity presents the high performance computing (HPC) community with the challenge of developing software using techniques that enable software portability without unduly compromising performance or significantly impacting productivity. Although there have been some partial successes in addressing this challenge, much work remains before the community can truly claim to have productive performance portability techniques (P3).","PeriodicalId":6723,"journal":{"name":"2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)","volume":"429 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74387131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}