Machine learning applied to simulations of collisions between rotating, differentiated planets
Pub Date: 2020-12-02 | DOI: 10.1186/s40668-020-00034-6
Miles L. Timpe, Maria Han Veiga, Mischa Knabenhans, Joachim Stadel, Stefano Marelli
In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.
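To illustrate the flavor of such an emulator, the following is a minimal sketch of one of the four technique families named above (an ensemble method), trained to regress a post-impact quantity from pre-impact parameters. The feature set, target, and synthetic data below are placeholders, not the authors' actual pipeline:

```python
# Hypothetical emulator sketch: a gradient-boosted ensemble regressor mapping
# pre-impact collision parameters to a post-impact quantity. All data here is
# synthetic; real training would use the 14,856 SPH simulation outcomes.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder design matrix: stand-ins for mass ratio, impact velocity,
# impact angle, and spin orientations of the two bodies.
X = rng.uniform(size=(14856, 6))
y = 1.0 - 0.5 * X[:, 1] * np.sin(X[:, 2])  # stand-in for largest-remnant mass fraction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

emulator = GradientBoostingRegressor(n_estimators=500, max_depth=4,
                                     learning_rate=0.05)
emulator.fit(X_tr, y_tr)
print("R^2 on held-out collisions:", emulator.score(X_te, y_te))
```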
{"title":"Machine learning applied to simulations of collisions between rotating, differentiated planets","authors":"Miles L. Timpe, Maria Han Veiga, Mischa Knabenhans, Joachim Stadel, Stefano Marelli","doi":"10.1186/s40668-020-00034-6","DOIUrl":"https://doi.org/10.1186/s40668-020-00034-6","url":null,"abstract":"<p>In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"7 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2020-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-020-00034-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4073275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technologies for supporting high-order geodesic mesh frameworks for computational astrophysics and space sciences
Pub Date: 2020-03-27 | DOI: 10.1186/s40668-020-00033-7
Vladimir Florinski, Dinshaw S. Balsara, Sudip Garain, Katharine F. Gurski
Many important problems in astrophysics, space physics, and geophysics involve flows of (possibly ionized) gases in the vicinity of a spherical object, such as a star or planet. The geometry of such a system naturally favors numerical schemes based on a spherical mesh. Despite its orthogonality property, the polar (latitude-longitude) mesh is ill suited for computation because of the singularity on the polar axis, leading to a highly non-uniform distribution of zone sizes. The consequences are (a) loss of accuracy due to large variations in zone aspect ratios, and (b) poor computational efficiency due to severe limitations on the time step. Geodesic meshes, based on a central projection using a Platonic solid as a template, solve the anisotropy problem but increase the complexity of the resulting computer code. We describe a new finite-volume implementation of the Euler and MHD systems of equations on a triangular geodesic mesh (TGM) that is accurate up to fourth order in space and time and conserves the divergence of the magnetic field to machine precision. The paper discusses in detail the generation of a TGM, domain decomposition techniques, three-dimensional conservative reconstruction, and time stepping.
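As an illustration of the central-projection construction described above, the following minimal sketch (not the authors' code) generates a triangular geodesic mesh by recursively subdividing an icosahedron and re-projecting new vertices onto the unit sphere:

```python
import numpy as np

def icosahedron():
    """Vertices and faces of a unit icosahedron (the Platonic template)."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    v = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
                  [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
                  [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),
         (1,5,9),(5,11,4),(11,10,2),(10,7,6),(7,1,8),
         (3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),
         (4,9,5),(2,4,11),(6,2,10),(8,6,7),(9,8,1)]
    return v, f

def subdivide(v, faces):
    """Split each triangle into four and re-project onto the unit sphere."""
    verts = list(map(tuple, v))
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (np.array(verts[i]) + np.array(verts[j])) / 2.0
            m /= np.linalg.norm(m)  # central projection onto the sphere
            cache[key] = len(verts)
            verts.append(tuple(m))
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(verts), new_faces

v, f = icosahedron()
for _ in range(3):            # three refinement levels: 20 * 4**3 = 1280 zones
    v, f = subdivide(v, f)
print(len(f), "triangular zones,", len(v), "vertices")
```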
{"title":"Technologies for supporting high-order geodesic mesh frameworks for computational astrophysics and space sciences","authors":"Vladimir Florinski, Dinshaw S. Balsara, Sudip Garain, Katharine F. Gurski","doi":"10.1186/s40668-020-00033-7","DOIUrl":"https://doi.org/10.1186/s40668-020-00033-7","url":null,"abstract":"<p>Many important problems in astrophysics, space physics, and geophysics involve flows of (possibly ionized) gases in the vicinity of a spherical object, such as a star or planet. The geometry of such a system naturally favors numerical schemes based on a spherical mesh. Despite its orthogonality property, the polar (latitude-longitude) mesh is ill suited for computation because of the singularity on the polar axis, leading to a highly non-uniform distribution of zone sizes. The consequences are (a)?loss of accuracy due to large variations in zone aspect ratios, and (b)?poor computational efficiency from a severe limitations on the time stepping. Geodesic meshes, based on a central projection using a Platonic solid as a template, solve the anisotropy problem, but increase the complexity of the resulting computer code. We describe a new finite volume implementation of Euler and MHD systems of equations on a triangular geodesic mesh (TGM) that is accurate up to fourth order in space and time and conserves the divergence of magnetic field to machine precision. The paper discusses in detail the generation of a TGM, the domain decomposition techniques, three-dimensional conservative reconstruction, and time stepping.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"7 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2020-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-020-00033-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"5051694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cosmological N-body simulations: a challenge for scalable generative models
Pub Date: 2019-12-19 | DOI: 10.1186/s40668-019-0032-1
Nathanaël Perraudin, Ankit Srivastava, Aurelien Lucchi, Tomasz Kacprzak, Thomas Hofmann, Alexandre Réfrégier
Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high-dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, three-dimensional matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have so far been mostly limited to two-dimensional data. In this work, we introduce a new benchmark for the generation of three-dimensional N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body three-dimensional cubes. Our technique relies on two key building blocks: (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN.
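Building block (i), the splitting of a large cube into sub-volumes that fit on existing hardware, can be sketched as follows; this toy version uses plain non-overlapping tiles and omits the paper's multi-scale conditioning (see the repository above for the actual architecture):

```python
import numpy as np

def split_cube(cube, patch=64):
    """Tile a 3D N-body density cube into non-overlapping sub-cubes."""
    n = cube.shape[0]
    assert n % patch == 0
    m = n // patch
    return (cube.reshape(m, patch, m, patch, m, patch)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(-1, patch, patch, patch))

def assemble_cube(patches, n):
    """Inverse of split_cube: stitch generated sub-cubes back together."""
    patch = patches.shape[1]
    m = n // patch
    return (patches.reshape(m, m, m, patch, patch, patch)
                   .transpose(0, 3, 1, 4, 2, 5)
                   .reshape(n, n, n))

cube = np.random.rand(256, 256, 256).astype(np.float32)
parts = split_cube(cube)            # 64 patches of 64^3 for the generator
rebuilt = assemble_cube(parts, 256)
assert np.allclose(cube, rebuilt)   # lossless round trip
```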
{"title":"Cosmological N-body simulations: a challenge for scalable generative models","authors":"Nathanaël Perraudin, Ankit Srivastava, Aurelien Lucchi, Tomasz Kacprzak, Thomas Hofmann, Alexandre Réfrégier","doi":"10.1186/s40668-019-0032-1","DOIUrl":"https://doi.org/10.1186/s40668-019-0032-1","url":null,"abstract":"<p>Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAs) have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, three-dimensional matter distribution, modeled with <i>N-body simulations</i>, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have been, so far, mostly limited to two dimensional data. In this work, we introduce a new benchmark for the generation of three dimensional <i>N</i>-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of <i>N</i>-body three-dimensional cubes. Our technique relies on two key building blocks, (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of <i>N</i>-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"6 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2019-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-019-0032-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4742835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A detection metric designed for O’Connell effect eclipsing binaries
Pub Date: 2019-11-08 | DOI: 10.1186/s40668-019-0031-2
Kyle B. Johnston, Rana Haber, Saida M. Caballero-Nieves, Adrian M. Peter, Véronique Petit, Matt Knote
We present the construction of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm. We focus on the targeted identification of eclipsing binaries that demonstrate a feature known as the O’Connell effect. Our proposed methodology maps stellar variable observations to a new representation known as distribution fields (DFs). Given this novel representation, we develop a metric learning technique directly on the DF space that is capable of specifically identifying our stars of interest. The metric is tuned on a set of labeled eclipsing binary data from the Kepler survey, targeting particular systems exhibiting the O’Connell effect. The result is a conservative selection of 124 potential targets of interest out of the Villanova Eclipsing Binary Catalog. Our framework demonstrates favorable performance on Kepler eclipsing binary data, taking a crucial step in preparing the way for large-scale data volumes from next-generation telescopes such as LSST and SKA.
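A simplified sketch of the DF representation, under the assumption that each phase bin of a folded light curve carries a probability distribution over normalized magnitude (the paper's exact construction may differ in detail):

```python
import numpy as np

def distribution_field(phase, mag, n_phase=32, n_mag=32):
    """Map a phase-folded light curve to a distribution field (DF): a 2D
    histogram over phase bins, with each phase bin's magnitude distribution
    normalized to sum to one."""
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    H, _, _ = np.histogram2d(phase, mag, bins=[n_phase, n_mag],
                             range=[[0, 1], [0, 1]])
    row_sums = H.sum(axis=1, keepdims=True)
    return H / np.where(row_sums == 0, 1, row_sums)

# Toy eclipsing-binary-like signal with unequal maxima (O'Connell effect).
rng = np.random.default_rng(1)
phase = rng.uniform(size=5000)
mag = (0.3 * np.exp(-0.5 * ((phase - 0.0) / 0.03) ** 2)    # primary eclipse
       + 0.15 * np.exp(-0.5 * ((phase - 0.5) / 0.03) ** 2)  # secondary eclipse
       - 0.05 * np.sin(2 * np.pi * phase)                   # asymmetric maxima
       + 0.01 * rng.normal(size=phase.size))
df = distribution_field(phase, mag)
print(df.shape)  # (32, 32)
```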
{"title":"A detection metric designed for O’Connell effect eclipsing binaries","authors":"Kyle B. Johnston, Rana Haber, Saida M. Caballero-Nieves, Adrian M. Peter, Véronique Petit, Matt Knote","doi":"10.1186/s40668-019-0031-2","DOIUrl":"https://doi.org/10.1186/s40668-019-0031-2","url":null,"abstract":"<p>We present the construction of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm. We focus on the targeted identification of eclipsing binaries that demonstrate a feature known as the O’Connell effect. Our proposed methodology maps stellar variable observations to a new representation known as distribution fields (DFs). Given this novel representation, we develop a metric learning technique directly on the DF space that is capable of specifically identifying our stars of interest. The metric is tuned on a set of labeled eclipsing binary data from the Kepler survey, targeting particular systems exhibiting the O’Connell effect. The result is a conservative selection of 124 potential targets of interest out of the Villanova Eclipsing Binary Catalog. Our framework demonstrates favorable performance on Kepler eclipsing binary data, taking a crucial step in preparing the way for large-scale data volumes from next-generation telescopes such as LSST and SKA.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"6 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2019-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-019-0031-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4355421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DESTINY: Database for the Effects of STellar encounters on dIsks and plaNetary sYstems
Pub Date: 2019-09-09 | DOI: 10.1186/s40668-019-0030-3
Asmita Bhandare, Susanne Pfalzner
Most stars form as part of a stellar group. These young stars are mostly surrounded by a disk from which a planetary system might eventually form. Both the disk and, later on, the planetary system may be affected by the cluster environment through close fly-bys. The database presented here can be used to determine the gravitational effect of such fly-bys on non-viscous disks and planetary systems. The database contains data for fly-by scenarios spanning mass ratios between the perturber and host star from 0.3 to 50.0, periastron distances from 30 au to 1000 au, orbital inclinations from 0° to 180°, and angles of periastron of 0°, 45° and 90°, thus covering a wide parameter space relevant for fly-bys in stellar clusters. The data can be downloaded to perform one’s own diagnostics (e.g., determining disk size or disk mass after specific encounters, or obtaining parameter dependencies), or the particle properties can be visualized interactively. Currently the database is restricted to fly-bys on parabolic orbits, but it will be extended to hyperbolic orbits in the future. All of the data from this extensive parameter study is now publicly available as DESTINY.
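As a hedged illustration of the kind of diagnostic one might run on downloaded particle data, the sketch below estimates a post-encounter disk size as the radius enclosing a chosen fraction of the particles still bound to the host star; the function, units, and threshold are hypothetical and are not part of the DESTINY interface:

```python
import numpy as np

def disk_size(pos, vel, m_host, frac=0.95, G=39.478):
    """Hypothetical post-encounter diagnostic: the radius (in au) enclosing
    `frac` of the test particles that remain bound to the host star.
    `pos` [au] and `vel` [au/yr] are coordinates relative to the host;
    G = 4*pi^2 in au^3 / (M_sun yr^2)."""
    r = np.linalg.norm(pos, axis=1)
    v2 = np.einsum('ij,ij->i', vel, vel)
    specific_energy = 0.5 * v2 - G * m_host / r
    bound = specific_energy < 0.0
    r_bound = np.sort(r[bound])
    return r_bound[int(frac * (r_bound.size - 1))] if r_bound.size else 0.0

# Toy usage with random particles around a 1 M_sun host.
rng = np.random.default_rng(2)
pos = rng.normal(scale=100.0, size=(10000, 3))   # au
vel = rng.normal(scale=2.0, size=(10000, 3))     # au/yr
print("disk size:", disk_size(pos, vel, m_host=1.0), "au")
```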
{"title":"DESTINY: Database for the Effects of STellar encounters on dIsks and plaNetary sYstems","authors":"Asmita Bhandare, Susanne Pfalzner","doi":"10.1186/s40668-019-0030-3","DOIUrl":"https://doi.org/10.1186/s40668-019-0030-3","url":null,"abstract":"<p>Most stars form as part of a stellar group. These young stars are mostly surrounded by a disk from which potentially a planetary system might form. Both, the disk and later on the planetary system, may be affected by the cluster environment due to close fly-bys. The here presented database can be used to determine the gravitational effect of such fly-bys on non-viscous disks and planetary systems. The database contains data for fly-by scenarios spanning mass ratios between the perturber and host star from 0.3 to 50.0, periastron distances from 30 au to 1000 au, orbital inclination from 0<sup>°</sup> to 180<sup>°</sup> and angle of periastron of 0<sup>°</sup>, 45<sup>°</sup> and 90<sup>°</sup>. Thus covering a wide parameter space relevant for fly-bys in stellar clusters. The data can either be downloaded to perform one’s own diagnostics like for e.g. determining disk size, disk mass, etc. after specific encounters, obtain parameter dependencies or the different particle properties can be visualized interactively. Currently the database is restricted to fly-bys on parabolic orbits, but it will be extended to hyperbolic orbits in the future. All of the data from this extensive parameter study is now publicly available as DESTINY.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"6 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2019-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-019-0030-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4407667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The IllustrisTNG simulations: public data release
Pub Date: 2019-05-14 | DOI: 10.1186/s40668-019-0028-x
Dylan Nelson, Volker Springel, Annalisa Pillepich, Vicente Rodriguez-Gomez, Paul Torrey, Shy Genel, Mark Vogelsberger, Ruediger Pakmor, Federico Marinacci, Rainer Weinberger, Luke Kelley, Mark Lovell, Benedikt Diemer, Lars Hernquist
We present the full public release of all data from the TNG100 and TNG300 simulations of the IllustrisTNG project. IllustrisTNG is a suite of large-volume, cosmological, gravo-magnetohydrodynamical simulations run with the moving-mesh code Arepo. TNG includes a comprehensive model for galaxy formation physics, and each TNG simulation self-consistently solves for the coupled evolution of dark matter, cosmic gas, luminous stars, and supermassive black holes from early times to the present day, $z=0$. Each of the flagship runs (TNG50, TNG100, and TNG300) is accompanied by halo/subhalo catalogs, merger trees, lower-resolution and dark-matter-only counterparts, all available with 100 snapshots. We discuss scientific and numerical cautions and caveats relevant when using TNG.
The data volume now directly accessible online is ~750 TB, including 1200 full volume snapshots and ~80,000 high time-resolution subbox snapshots. This will increase to ~1.1 PB with the future release of TNG50. Data access and analysis examples are available in IDL, Python, and Matlab. We describe improvements and new functionality in the web-based API, including on-demand visualization and analysis of galaxies and halos, exploratory plotting of scaling relations and other relationships between galactic and halo properties, and a new JupyterLab interface. This provides an online, browser-based, near-native data analysis platform enabling user computation with local access to TNG data, alleviating the need to download large datasets.
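A minimal sketch of querying the web-based API, following the pattern of the documented Python examples (an API key from the TNG website is assumed, and the endpoint and field names below reflect the public documentation rather than code from this paper):

```python
import requests

BASE = "https://www.tng-project.org/api/"
HEADERS = {"api-key": "YOUR_API_KEY_HERE"}  # issued after free registration

def get(path, params=None):
    """GET a JSON resource from the TNG API, raising on HTTP errors."""
    r = requests.get(BASE + path, headers=HEADERS, params=params)
    r.raise_for_status()
    return r.json()

# List the simulations available in the release.
sims = [s["name"] for s in get("")["simulations"]]
print(sims)

# Query the five most stellar-massive subhalos at z=0 (snapshot 99) of TNG100-1.
subhalos = get("TNG100-1/snapshots/99/subhalos/",
               params={"limit": 5, "order_by": "-mass_stars"})
for sub in subhalos["results"]:
    print(sub["id"], sub["url"])
```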
{"title":"The IllustrisTNG simulations: public data release","authors":"Dylan Nelson, Volker Springel, Annalisa Pillepich, Vicente Rodriguez-Gomez, Paul Torrey, Shy Genel, Mark Vogelsberger, Ruediger Pakmor, Federico Marinacci, Rainer Weinberger, Luke Kelley, Mark Lovell, Benedikt Diemer, Lars Hernquist","doi":"10.1186/s40668-019-0028-x","DOIUrl":"https://doi.org/10.1186/s40668-019-0028-x","url":null,"abstract":"<p>We present the full public release of all data from the TNG100 and TNG300 simulations of the IllustrisTNG project. IllustrisTNG is a suite of large volume, cosmological, gravo-magnetohydrodynamical simulations run with the moving-mesh code <span>Arepo</span>. TNG includes a comprehensive model for galaxy formation physics, and each TNG simulation self-consistently solves for the coupled evolution of dark matter, cosmic gas, luminous stars, and supermassive black holes from early time to the present day, <span>(z=0)</span>. Each of the flagship runs—TNG50, TNG100, and TNG300—are accompanied by halo/subhalo catalogs, merger trees, lower-resolution and dark-matter only counterparts, all available with 100 snapshots. We discuss scientific and numerical cautions and caveats relevant when using TNG.</p><p>The data volume now directly accessible online is ~750 TB, including 1200 full volume snapshots and ~80,000 high time-resolution subbox snapshots. This will increase to ~1.1 PB with the future release of TNG50. Data access and analysis examples are available in IDL, Python, and Matlab. We describe improvements and new functionality in the web-based API, including on-demand visualization and analysis of galaxies and halos, exploratory plotting of scaling relations and other relationships between galactic and halo properties, and a new JupyterLab interface. This provides an online, browser-based, near-native data analysis platform enabling user computation with local access to TNG data, alleviating the need to download large datasets.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"6 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2019-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-019-0028-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4586519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks
Pub Date: 2019-05-06 | DOI: 10.1186/s40668-019-0029-9
Mustafa Mustafa, Deborah Bard, Wahid Bhimji, Zarija Lukić, Rami Al-Rfou, Jan M. Kratochvil
Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. This often relies critically on high-fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high-dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described, with high statistical confidence, by the same summary statistics as the fully simulated maps.
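A minimal PyTorch sketch of a DCGAN-style generator for single-channel 256×256 convergence maps; the layer sizes and latent dimension are illustrative assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, ngf=64):
        super().__init__()
        layers, ch = [], ngf * 16
        # Project the latent vector to a 4x4 feature map, then upsample by
        # 2x per block until reaching 256x256.
        layers += [nn.ConvTranspose2d(z_dim, ch, 4, 1, 0, bias=False),
                   nn.BatchNorm2d(ch), nn.ReLU(True)]
        for _ in range(6):                       # 4 -> 8 -> ... -> 256
            layers += [nn.ConvTranspose2d(ch, ch // 2, 4, 2, 1, bias=False),
                       nn.BatchNorm2d(ch // 2), nn.ReLU(True)]
            ch //= 2
        layers += [nn.Conv2d(ch, 1, 3, 1, 1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = Generator()
maps = g(torch.randn(2, 64))   # two fake convergence maps from random latents
print(maps.shape)              # torch.Size([2, 1, 256, 256])
```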
{"title":"CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks","authors":"Mustafa Mustafa, Deborah Bard, Wahid Bhimji, Zarija Lukić, Rami Al-Rfou, Jan M. Kratochvil","doi":"10.1186/s40668-019-0029-9","DOIUrl":"https://doi.org/10.1186/s40668-019-0029-9","url":null,"abstract":"<p>Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. This often relies critically on high fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described by, with high statistical confidence, the same summary statistics as the fully simulated maps.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"6 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2019-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-019-0029-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4270013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new hybrid technique for modeling dense star clusters
Pub Date: 2018-11-28 | DOI: 10.1186/s40668-018-0027-3
Carl L. Rodriguez, Bharath Pattabiraman, Sourav Chatterjee, Alok Choudhary, Wei-keng Liao, Meagan Morscher, Frederic A. Rasio
The “gravitational million-body problem,” to model the dynamical evolution of a self-gravitating, collisional N-body system with ~$10^6$ particles over many relaxation times, remains a major challenge in computational astrophysics. Unfortunately, current techniques to model such systems suffer from severe limitations. A direct N-body simulation with more than $10^5$ particles can require months or even years to complete, while an orbit-sampling Monte Carlo approach cannot adequately model the dynamics in a dense cluster core, particularly in the presence of many black holes. We have developed a new technique combining the precision of a direct N-body integration with the speed of a Monte Carlo approach. Our Rapid And Precisely Integrated Dynamics code, the RAPID code, statistically models interactions between neighboring stars and stellar binaries while directly integrating the orbits of stars or black holes in the cluster core. This allows us to accurately simulate the dynamics of the black holes in a realistic globular cluster environment without the burdensome $N^{2}$ scaling of a full N-body integration. We compare RAPID models of idealized globular clusters to identical models from the direct N-body and Monte Carlo methods. Our tests show that RAPID can reproduce the half-mass radii, core radii, black hole ejection rates, and binary properties of the direct N-body models far more accurately than a standard Monte Carlo integration while remaining significantly faster than a full N-body integration. With this technique, it will be possible to create more realistic models of Milky Way globular clusters with sufficient rapidity to explore the full parameter space of dense stellar clusters.
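For context, the burdensome $N^{2}$ scaling comes from the pairwise force sum of direct integration, as in the toy leapfrog step below (background illustration only, not the RAPID algorithm):

```python
# A toy kick-drift-kick step of direct N-body integration (G = 1). The
# accelerations() call does O(N^2) work, which is the cost the hybrid
# Monte Carlo approach avoids for the cluster halo.
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Softened pairwise gravitational accelerations: O(N^2) work."""
    dx = pos[None, :, :] - pos[:, None, :]           # (N, N, 3) separations
    r2 = np.einsum('ijk,ijk->ij', dx, dx) + eps**2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                    # no self-force
    return np.einsum('ij,j,ijk->ik', inv_r3, mass, dx)

def leapfrog_step(pos, vel, mass, dt):
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # kick
    pos = pos + dt * vel                             # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # kick
    return pos, vel

N = 1000
rng = np.random.default_rng(3)
pos, vel = rng.normal(size=(N, 3)), np.zeros((N, 3))
pos, vel = leapfrog_step(pos, vel, np.full(N, 1.0 / N), dt=1e-3)
```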
{"title":"A new hybrid technique for modeling dense star clusters","authors":"Carl L. Rodriguez, Bharath Pattabiraman, Sourav Chatterjee, Alok Choudhary, Wei-keng Liao, Meagan Morscher, Frederic A. Rasio","doi":"10.1186/s40668-018-0027-3","DOIUrl":"https://doi.org/10.1186/s40668-018-0027-3","url":null,"abstract":"<p>The “gravitational million-body problem,” to model the dynamical evolution of a self-gravitating, collisional <i>N</i>-body system with ~10<sup>6</sup> particles over many relaxation times, remains a major challenge in computational astrophysics. Unfortunately, current techniques to model such systems suffer from severe limitations. A direct <i>N</i>-body simulation with more than 10<sup>5</sup> particles can require months or even years to complete, while an orbit-sampling Monte Carlo approach cannot adequately model the dynamics in a dense cluster core, particularly in the presence of many black holes. We have developed a new technique combining the precision of a direct <i>N</i>-body integration with the speed of a Monte Carlo approach. Our Rapid And Precisely Integrated Dynamics code, the <span>RAPID</span> code, statistically models interactions between neighboring stars and stellar binaries while integrating directly the orbits of stars or black holes in the cluster core. This allows us to accurately simulate the dynamics of the black holes in a realistic globular cluster environment without the burdensome <span>(N^{2})</span> scaling of a full <i>N</i>-body integration. We compare <span>RAPID</span> models of idealized globular clusters to identical models from the direct <i>N</i>-body and Monte Carlo methods. Our tests show that <span>RAPID</span> can reproduce the half-mass radii, core radii, black hole ejection rates, and binary properties of the direct <i>N</i>-body models far more accurately than a standard Monte Carlo integration while remaining significantly faster than a full <i>N</i>-body integration. With this technique, it will be possible to create more realistic models of Milky Way globular clusters with sufficient rapidity to explore the full parameter space of dense stellar clusters.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"5 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-018-0027-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"5095816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast cosmic web simulations with generative adversarial networks
Pub Date: 2018-11-23 | DOI: 10.1186/s40668-018-0026-4
Andres C. Rodríguez, Tomasz Kacprzak, Aurelien Lucchi, Adam Amara, Raphaël Sgier, Janis Fluri, Thomas Hofmann, Alexandre Réfrégier
Dark matter in the universe evolves through gravity to form a complex network of halos, filaments, sheets and voids, that is known as the cosmic web. Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution. Therefore, upcoming cosmology experiments will face a computational bottleneck that may limit the exploitation of their full scientific potential. To address this challenge, we demonstrate the application of a machine learning technique called Generative Adversarial Networks (GAN) to learn models that can efficiently generate new, physically realistic realizations of the cosmic web. Our training set is a small, representative sample of 2D image snapshots from N-body simulations of size 500 and 100 Mpc. We show that the GAN-generated samples are qualitatively and quantitatively very similar to the originals. For the larger boxes of size 500 Mpc, it is very difficult to distinguish them visually. The agreement of the power spectrum $P_k$ is 1–2% for most of the range, between $k=0.06$ and $k=0.4$. For the remaining values of k, the agreement is within 15%, with the error rate increasing for $k>0.8$. For smaller boxes of size 100 Mpc, we find the visual agreement to be good, but some differences are noticeable. The error on the power spectrum is of the order of 20%. We attribute this loss of performance to the fact that the matter distribution in 100 Mpc cutouts was very inhomogeneous between images, a situation in which the performance of GANs is known to deteriorate. We find a good match for the correlation matrix of the full $P_k$ range for the 100 Mpc data and of small scales for 500 Mpc, with ~20% disagreement for large scales. An important advantage of generating cosmic web realizations with a GAN is the considerable gain in computation time. Each new sample generated by a GAN takes a fraction of a second, compared to the many hours needed by traditional N-body techniques. We anticipate that the use of generative models such as GANs will therefore play an important role in providing extremely fast and precise simulations of the cosmic web in the era of large cosmological surveys, such as Euclid and the Large Synoptic Survey Telescope (LSST).
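A sketch of the kind of power-spectrum comparison quoted above: an azimuthally averaged $P_k$ of 2D fields and the fractional agreement between a generated and a simulated sample (random placeholder fields stand in for the actual data):

```python
import numpy as np

def power_spectrum_2d(img, n_bins=50):
    """Azimuthally averaged power spectrum of a square 2D field."""
    n = img.shape[0]
    fk = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(fk) ** 2
    ky, kx = np.indices(img.shape) - n // 2        # wavenumber grid (pixels)
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0.5, n // 2, n_bins + 1)
    which = np.digitize(k, bins)
    pk = np.array([power.ravel()[which == i].mean()
                   for i in range(1, n_bins + 1)])
    k_cent = 0.5 * (bins[1:] + bins[:-1])
    return k_cent, pk

real = np.random.lognormal(size=(500, 500))  # stand-in for an N-body slice
fake = np.random.lognormal(size=(500, 500))  # stand-in for a GAN sample
k, pk_real = power_spectrum_2d(real)
_, pk_fake = power_spectrum_2d(fake)
frac_err = np.abs(pk_fake - pk_real) / pk_real
print("median fractional P_k error:", np.median(frac_err))
```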
{"title":"Fast cosmic web simulations with generative adversarial networks","authors":"Andres C. Rodríguez, Tomasz Kacprzak, Aurelien Lucchi, Adam Amara, Raphaël Sgier, Janis Fluri, Thomas Hofmann, Alexandre Réfrégier","doi":"10.1186/s40668-018-0026-4","DOIUrl":"https://doi.org/10.1186/s40668-018-0026-4","url":null,"abstract":"<p>Dark matter in the universe evolves through gravity to form a complex network of halos, filaments, sheets and voids, that is known as the cosmic web. Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution. Therefore, upcoming cosmology experiments will face a computational bottleneck that may limit the exploitation of their full scientific potential. To address this challenge, we demonstrate the application of a machine learning technique called Generative Adversarial Networks (GAN) to learn models that can efficiently generate new, physically realistic realizations of the cosmic web. Our training set is a small, representative sample of 2D image snapshots from N-body simulations of size 500 and 100 Mpc. We show that the GAN-generated samples are qualitatively and quantitatively very similar to the originals. For the larger boxes of size 500 Mpc, it is very difficult to distinguish them visually. The agreement of the power spectrum <span>(P_{k})</span> is 1–2% for most of the range, between <span>(k=0.06)</span> and <span>(k=0.4)</span>. For the remaining values of <i>k</i>, the agreement is within 15%, with the error rate increasing for <span>(k>0.8)</span>. For smaller boxes of size 100 Mpc, we find that the visual agreement to be good, but some differences are noticable. The error on the power spectrum is of the order of 20%. We attribute this loss of performance to the fact that the matter distribution in 100 Mpc cutouts was very inhomogeneous between images, a situation in which the performance of GANs is known to deteriorate. We find a good match for the correlation matrix of full <span>(P_{k})</span> range for 100 Mpc data and of small scales for 500 Mpc, with ~20% disagreement for large scales. An important advantage of generating cosmic web realizations with a GAN is the considerable gains in terms of computation time. Each new sample generated by a GAN takes a fraction of a second, compared to the many hours needed by traditional N-body techniques. We anticipate that the use of generative models such as GANs will therefore play an important role in providing extremely fast and precise simulations of cosmic web in the era of large cosmological surveys, such as Euclid and Large Synoptic Survey Telescope (LSST).</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"5 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2018-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-018-0026-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"5230357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Observing supermassive black holes in virtual reality
Pub Date: 2018-11-19 | DOI: 10.1186/s40668-018-0023-7
Jordy Davelaar, Thomas Bronzwaer, Daniel Kok, Ziri Younsi, Monika Mościbrodzka, Heino Falcke
We present 360° (i.e., 4π steradian) general-relativistic ray-tracing and radiative transfer calculations of accreting supermassive black holes. We perform state-of-the-art three-dimensional general-relativistic magnetohydrodynamical simulations using the BHAC code, subsequently post-processing this data with the radiative transfer code RAPTOR. All relativistic and general-relativistic effects, such as Doppler boosting and gravitational redshift, as well as geometrical effects due to the local gravitational field and the observer’s changing position and state of motion, are therefore calculated self-consistently. Synthetic images at four astronomically relevant observing frequencies are generated from the perspective of an observer with a full 360° view inside the accretion flow, who is advected with the flow as it evolves. As an example, we calculated images based on recent best-fit models of observations of Sagittarius A*. These images are combined to generate a complete 360° Virtual Reality movie of the surrounding environment of the black hole and its event horizon. Our approach also enables the calculation of the local luminosity received at a given fluid element in the accretion flow, providing important applications in, e.g., radiation feedback calculations onto black hole accretion flows. In addition to scientific applications, the 360° Virtual Reality movies we present also represent a new medium through which to interactively communicate black hole physics to a wider audience, serving as a powerful educational tool.
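A minimal sketch of the 4π-steradian camera geometry behind such a frame: each pixel of an equirectangular image maps to a unit ray direction, which a ray tracer would then integrate through the spacetime (the mapping below is generic, not code from RAPTOR):

```python
import numpy as np

def equirectangular_rays(width=1920, height=960):
    """Unit ray directions for every pixel of an equirectangular frame,
    covering the full sphere of directions around the observer."""
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi    # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi  # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)         # (height, width, 3)

rays = equirectangular_rays()
print(rays.shape, np.allclose(np.linalg.norm(rays, axis=-1), 1.0))
```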
{"title":"Observing supermassive black holes in virtual reality","authors":"Jordy Davelaar, Thomas Bronzwaer, Daniel Kok, Ziri Younsi, Monika Mościbrodzka, Heino Falcke","doi":"10.1186/s40668-018-0023-7","DOIUrl":"https://doi.org/10.1186/s40668-018-0023-7","url":null,"abstract":"<p>We present a 360<sup>°</sup> (i.e., 4<i>π</i> steradian) general-relativistic ray-tracing and radiative transfer calculations of accreting supermassive black holes. We perform state-of-the-art three-dimensional general-relativistic magnetohydrodynamical simulations using the <span>BHAC</span> code, subsequently post-processing this data with the radiative transfer code <span>RAPTOR</span>. All relativistic and general-relativistic effects, such as Doppler boosting and gravitational redshift, as well as geometrical effects due to the local gravitational field and the observer’s changing position and state of motion, are therefore calculated self-consistently. Synthetic images at four astronomically-relevant observing frequencies are generated from the perspective of an observer with a full 360<sup>°</sup> view inside the accretion flow, who is advected with the flow as it evolves. As an example we calculated images based on recent best-fit models of observations of Sagittarius A*. These images are combined to generate a complete 360<sup>°</sup> Virtual Reality movie of the surrounding environment of the black hole and its event horizon. Our approach also enables the calculation of the local luminosity received at a given fluid element in the accretion flow, providing important applications in, e.g., radiation feedback calculations onto black hole accretion flows. In addition to scientific applications, the 360<sup>°</sup> Virtual Reality movies we present also represent a new medium through which to interactively communicate black hole physics to a wider audience, serving as a powerful educational tool.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":"5 1","pages":""},"PeriodicalIF":16.281,"publicationDate":"2018-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-018-0023-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4770467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}