Jun-Young Lee, Ji-hoon Kim, Minyong Jung, Boon Kiat Oh, Yongseok Jo, Songyoun Park, Jaehyun Lee, Yuan-Sen Ting, Ho Seong Hwang
{"title":"Inferring Cosmological Parameters on SDSS via Domain-Generalized Neural Networks and Lightcone Simulations","authors":"Jun-Young Lee, Ji-hoon Kim, Minyong Jung, Boon Kiat Oh, Yongseok Jo, Songyoun Park, Jaehyun Lee, Yuan-Sen Ting, Ho Seong Hwang","doi":"arxiv-2409.02256","DOIUrl":null,"url":null,"abstract":"We present a proof-of-concept simulation-based inference on $\\Omega_{\\rm m}$\nand $\\sigma_{8}$ from the SDSS BOSS LOWZ NGC catalog using neural networks and\ndomain generalization techniques without the need of summary statistics. Using\nrapid lightcone simulations, ${\\rm L{\\scriptsize -PICOLA}}$, mock galaxy\ncatalogs are produced that fully incorporate the observational effects. The\ncollection of galaxies is fed as input to a point cloud-based network,\n${\\texttt{Minkowski-PointNet}}$. We also add relatively more accurate ${\\rm\nG{\\scriptsize ADGET}}$ mocks to obtain robust and generalizable neural\nnetworks. By explicitly learning the representations which reduces the\ndiscrepancies between the two different datasets via the semantic alignment\nloss term, we show that the latent space configuration aligns into a single\nplane in which the two cosmological parameters form clear axes. Consequently,\nduring inference, the SDSS BOSS LOWZ NGC catalog maps onto the plane,\ndemonstrating effective generalization and improving prediction accuracy\ncompared to non-generalized models. Results from the ensemble of 25\nindependently trained machines find $\\Omega_{\\rm m}=0.339 \\pm 0.056$ and\n$\\sigma_{8}=0.801 \\pm 0.061$, inferred only from the distribution of galaxies\nin the lightcone slices without relying on any indirect summary statistics. A\nsingle machine that best adapts to the ${\\rm G{\\scriptsize ADGET}}$ mocks\nyields a tighter prediction of $\\Omega_{\\rm m}=0.282 \\pm 0.014$ and\n$\\sigma_{8}=0.786 \\pm 0.036$. 
We emphasize that adaptation across multiple\ndomains can enhance the robustness of the neural networks in observational\ndata.","PeriodicalId":501207,"journal":{"name":"arXiv - PHYS - Cosmology and Nongalactic Astrophysics","volume":"75 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Cosmology and Nongalactic Astrophysics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.02256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We present a proof-of-concept simulation-based inference of $\Omega_{\rm m}$ and $\sigma_{8}$ from the SDSS BOSS LOWZ NGC catalog using neural networks and domain generalization techniques, without the need for summary statistics. Using the rapid lightcone simulation code ${\rm L{\scriptsize -PICOLA}}$, we produce mock galaxy catalogs that fully incorporate observational effects. The collection of galaxies is fed as input to a point-cloud-based network, ${\texttt{Minkowski-PointNet}}$. We also add more accurate ${\rm G{\scriptsize ADGET}}$ mocks to obtain robust and generalizable neural networks. By explicitly learning representations that reduce the discrepancies between the two datasets via a semantic alignment loss term, we show that the latent space aligns into a single plane in which the two cosmological parameters form clear axes. Consequently, during inference, the SDSS BOSS LOWZ NGC catalog maps onto this plane, demonstrating effective generalization and improved prediction accuracy compared to non-generalized models. An ensemble of 25 independently trained machines finds $\Omega_{\rm m}=0.339 \pm 0.056$ and $\sigma_{8}=0.801 \pm 0.061$, inferred only from the distribution of galaxies in the lightcone slices, without relying on any indirect summary statistics. The single machine that best adapts to the ${\rm G{\scriptsize ADGET}}$ mocks yields a tighter prediction of $\Omega_{\rm m}=0.282 \pm 0.014$ and $\sigma_{8}=0.786 \pm 0.036$. We emphasize that adaptation across multiple domains can enhance the robustness of neural networks on observational data.
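The semantic alignment idea above can be sketched in a few lines: latent codes from the two simulation domains are grouped by their cosmological-parameter label, and the distance between same-parameter group means is penalized so that both domains occupy the same latent plane. This is a minimal NumPy illustration only; the function name, the binning scheme, and the mean-matching form of the penalty are assumptions, not the paper's actual implementation.

```python
import numpy as np

def semantic_alignment_loss(z_src, z_tgt, labels_src, labels_tgt, n_bins=4):
    """Hypothetical sketch of a semantic alignment penalty.

    Latent vectors from a source domain (e.g. L-PICOLA mocks) and a target
    domain (e.g. GADGET mocks) are binned by their parameter label (e.g. the
    Omega_m used to generate each mock), and the squared distance between
    per-bin mean latents of the two domains is accumulated.
    """
    # Shared bin edges over the full label range of both domains.
    lo = min(labels_src.min(), labels_tgt.min())
    hi = max(labels_src.max(), labels_tgt.max())
    edges = np.linspace(lo, hi, n_bins + 1)

    loss = 0.0
    for i in range(n_bins):
        in_src = (labels_src >= edges[i]) & (labels_src <= edges[i + 1])
        in_tgt = (labels_tgt >= edges[i]) & (labels_tgt <= edges[i + 1])
        if in_src.any() and in_tgt.any():
            # Penalize domain mismatch between same-parameter representations.
            diff = z_src[in_src].mean(axis=0) - z_tgt[in_tgt].mean(axis=0)
            loss += float(diff @ diff)
    return loss
```

Added to the usual regression loss during training, a term of this shape pulls same-parameter representations from different simulators together, which is what lets an observed catalog map onto the learned parameter plane at inference time.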