What trade-off for astronomy between greenhouse gas emissions and the societal benefits? A sociological approach (arXiv:2409.04138)
P. Hennebelle, M. Barsuglia, F. Billebaud, M. Bouffard, N. Champollion, M. Grybos, H. Meheut, M. Parmentier, P. Petitjean
The threat posed to humanity by global warming has led scientists to question the nature of their activities and the need to reduce the greenhouse gas (GHG) emissions from research. Until now, most studies have aimed at quantifying carbon footprints, and relatively few have addressed how GHG emissions can be significantly reduced. A factor-of-two reduction by 2030 implies thinking beyond efficiency gains in current processes, which will have a limited effect, and beyond wishful thinking about large new sources of energy. Hence, choices among research questions, or among the means allocated within a given field, will be needed. They can be made in light of the perceived societal utility of research activities. Here, we address how scientists perceive the impact of GHG reduction on their discipline, and a possible trade-off between the societal utility of their discipline and an acceptable level of GHG emissions. We conducted 28 semi-structured interviews with French astrophysicists from different laboratories. Our most important finding is that, for most researchers, astronomy is considered to have a positive societal impact, mainly through education but also because of the fascination it exerts on at least a fraction of the general public. Technological applications are also mentioned, but with less emphasis. Reducing GHG emissions is believed to be necessary, and reductions have most often been achieved within the private sphere. However, the question of community-wide reductions in astrophysics research, and in particular possible cuts to large facilities, reveals far more divided opinions.
{"title":"What trade-off for astronomy between greenhouse gas emissions and the societal benefits? A sociological approach","authors":"P. Hennebelle, M. Barsuglia, F. Billebaud, M. Bouffard, N. Champollion, M. Grybos, H. Meheut, M. Parmentier, P. Petitjean","doi":"arxiv-2409.04138","DOIUrl":"https://doi.org/arxiv-2409.04138","url":null,"abstract":"The threat posed to humanity by global warming has led scientists to question\u0000the nature of their activities and the need to reduce the greenhouse gas\u0000emissions from research. Until now, most studies have aimed at quantifying the\u0000carbon footprints and relatively less works have addressed the ways GHG\u0000emissions can be significantly reduced. A factor two reduction by 2030 implies\u0000to think beyond increases in the efficacy of current processes, which will have\u0000a limited effect, and beyond wishful thinking about large new sources of\u0000energy. Hence, choices among research questions or allocated means within a\u0000given field will be needed. They can be made in light of the perceived societal\u0000utility of research activities. Here, we addressed the question of how\u0000scientists perceive the impact of GHG reduction on their discipline and a\u0000possible trade-off between the societal utility of their discipline and an\u0000acceptable level of GHG emissions. We conducted 28 semi-directive interviews of\u0000French astrophysicists from different laboratories. Our most important findings\u0000are that, for most researchers, astronomy is considered to have a positive\u0000societal impact mainly regarding education but also because of the fascination\u0000it exerts on at least a fraction of the general public. Technological\u0000applications are also mentioned but with relatively less emphasis. The\u0000reduction of GHG emissions is believed to be necessary and most often\u0000reductions within the private-sphere have been achieved. However, the question\u0000of community-wide reductions in astrophysics research, and in particular the\u0000possible reductions of large facilities reveals much more contrasted opinions.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepTTV: Deep Learning Prediction of Hidden Exoplanet From Transit Timing Variations (arXiv:2409.04557)
Chen Chen, Lingkai Kong, Gongjie Li, Molei Tao
Transit timing variation (TTV) provides rich information about the mass and orbital properties of exoplanets, which are often obtained by solving an inverse problem via Markov Chain Monte Carlo (MCMC). In this paper, we design a new data-driven approach that can potentially be applied to problems that are hard for traditional MCMC methods, such as the case with only one transiting planet. Specifically, we use a deep learning approach to predict the parameters of a non-transiting companion in a single-transiting system, with transit information (i.e., TTV and transit duration variation, TDV) as input. Thanks to a newly constructed Transformer-based architecture that can extract long-range interactions from sequential TTV data, this previously difficult task can now be accomplished with high accuracy, with an overall fractional error of $\sim$2% on mass and eccentricity.
{"title":"DeepTTV: Deep Learning Prediction of Hidden Exoplanet From Transit Timing Variations","authors":"Chen Chen, Lingkai Kong, Gongjie Li, Molei Tao","doi":"arxiv-2409.04557","DOIUrl":"https://doi.org/arxiv-2409.04557","url":null,"abstract":"Transit timing variation (TTV) provides rich information about the mass and\u0000orbital properties of exoplanets, which are often obtained by solving an\u0000inverse problem via Markov Chain Monte Carlo (MCMC). In this paper, we design a\u0000new data-driven approach, which potentially can be applied to problems that are\u0000hard to traditional MCMC methods, such as the case with only one planet\u0000transiting. Specifically, we use a deep learning approach to predict the\u0000parameters of non-transit companion for the single transit system with transit\u0000information (i.e., TTV, and Transit Duration Variation (TDV)) as input. Thanks\u0000to a newly constructed textit{Transformer}-based architecture that can extract\u0000long-range interactions from TTV sequential data, this previously difficult\u0000task can now be accomplished with high accuracy, with an overall fractional\u0000error of $sim$2% on mass and eccentricity.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"181 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstruction methods for the phase-shifted Zernike wavefront sensor (arXiv:2409.04547)
Vincent Chambouleyron, Mahawa Cissé, Maïssa Salama, Sebastiaan Haffert, Vincent Déo, Charlotte Guthery, J. Kent Wallace, Daren Dillon, Rebecca Jensen-Clem, Phil Hinz, Bruce Macintosh
The Zernike wavefront sensor (ZWFS) stands out as one of the most sensitive optical systems for measuring the phase of an incoming wavefront, reaching photon efficiencies close to the fundamental limit. This quality, combined with the fact that it can easily measure phase discontinuities, has led to its widespread adoption in various wavefront control applications, both on the ground and in future space-based instruments. Despite its advantages, the ZWFS suffers from an extremely limited dynamic range, which makes ground-based operation particularly difficult. One approach to address this limitation is to use the ZWFS downstream of a general adaptive optics (AO) system; however, even in this scenario, the dynamic range remains a concern. This paper investigates two optical configurations of the ZWFS: the conventional setup and its phase-shifted counterpart, which generates two distinct images of the telescope pupil. We assess the performance of various reconstruction techniques for both configurations, spanning from traditional linear reconstructors to gradient-descent-based methods. The evaluation encompasses simulations and experimental tests conducted on the Santa Cruz Extreme Adaptive optics Lab (SEAL) bench at UCSC. Our findings demonstrate that several of the reconstruction techniques introduced in this study significantly enhance the dynamic range of the ZWFS, particularly when the phase-shifted version is used.
{"title":"Reconstruction methods for the phase-shifted Zernike wavefront sensor","authors":"Vincent Chambouleyron, Mahawa Cissé, Maïssa Salama, Sebastiaan Haffert, Vincent Déo, Charlotte Guthery, J. Kent Wallace, Daren Dillon, Rebecca Jensen-Clem, Phil Hinz, Bruce Macintosh","doi":"arxiv-2409.04547","DOIUrl":"https://doi.org/arxiv-2409.04547","url":null,"abstract":"The Zernike wavefront sensor (ZWFS) stands out as one of the most sensitive\u0000optical systems for measuring the phase of an incoming wavefront, reaching\u0000photon efficiencies close to the fundamental limit. This quality, combined with\u0000the fact that it can easily measure phase discontinuities, has led to its\u0000widespread adoption in various wavefront control applications, both on the\u0000ground but also for future space-based instruments. Despite its advantages, the\u0000ZWFS faces a significant challenge due to its extremely limited dynamic range,\u0000making it particularly challenging for ground-based operations. To address this\u0000limitation, one approach is to use the ZWFS after a general adaptive optics\u0000(AO) system; however, even in this scenario, the dynamic range remains a\u0000concern. This paper investigates two optical configurations of the ZWFS: the\u0000conventional setup and its phase-shifted counterpart, which generates two\u0000distinct images of the telescope pupil. We assess the performance of various\u0000reconstruction techniques for both configurations, spanning from traditional\u0000linear reconstructors to gradient-descent-based methods. The evaluation\u0000encompasses simulations and experimental tests conducted on the Santa cruz\u0000Extreme Adaptive optics Lab (SEAL) bench at UCSC. Our findings demonstrate that\u0000certain innovative reconstruction techniques introduced in this study\u0000significantly enhance the dynamic range of the ZWFS, particularly when\u0000utilizing the phase-shifted version.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The carbon footprint of astronomical observatories (arXiv:2409.04054)
Jürgen Knödlseder
The carbon footprint of astronomical research is an increasingly topical issue. From a comparison of the existing literature, we infer an annual per capita carbon footprint of several tens of tonnes of CO$_2$ equivalent for an average person working in astronomy. Astronomical observatories contribute significantly to the carbon footprint of astronomy, and we examine the related sources of greenhouse gas emissions as well as lever arms for their reduction. Comparison with other scientific domains illustrates that astronomy is not the only field that needs to accomplish significant reductions in the carbon footprint of its research facilities. We show that limiting global warming to 1.5°C or 2°C implies greenhouse gas emission reductions that can only be reached by a systemic change of astronomical research activities, and we argue that a new narrative for doing astronomical research is needed if we want to keep our planet habitable.
{"title":"The carbon footprint of astronomical observatories","authors":"Jürgen Knödlseder","doi":"arxiv-2409.04054","DOIUrl":"https://doi.org/arxiv-2409.04054","url":null,"abstract":"The carbon footprint of astronomical research is an increasingly topical\u0000issue. From a comparison of existing literature, we infer an annual per capita\u0000carbon footprint of several tens of tonnes of CO$_2$ equivalents for an average\u0000person working in astronomy. Astronomical observatories contribute\u0000significantly to the carbon footprint of astronomy, and we examine the related\u0000sources of greenhouse gas emissions as well as lever arms for their reduction.\u0000Comparison with other scientific domains illustrates that astronomy is not the\u0000only field that needs to accomplish significant carbon footprint reductions of\u0000their research facilities. We show that limiting global warming to 1.5{deg}C\u0000or 2{deg}C implies greenhouse gas emission reductions that can only be reached\u0000by a systemic change of astronomical research activities, and we argue that a\u0000new narrative for doing astronomical research is needed if we want to keep our\u0000planet habitable.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alignment of three mirror anastigmat telescopes using a multilayered stochastic parallel gradient descent algorithm (arXiv:2409.04640)
Solvay Blomquist, Heejoo Choi, Hyukmo Kang, Kevin Derby, Pierre Nicolas, Ewan S. Douglas, Daewook Kim
When a telescope does not deliver a reasonable point spread function on the detector, or a detectable wavefront quality, after initial assembly, a coarse on-sky phase alignment is crucial. Before a closed-loop adaptive optics system can be used, the observatory needs a strategy to actively align the telescope well enough for fine wavefront sensing. This paper presents an early-stage alignment method based on a stochastic parallel gradient descent (SPGD) algorithm, which applies random perturbations to the optics of a three-mirror anastigmat telescope design. The SPGD algorithm drives the telescope until the wavefront error falls within the capture range of the fine adaptive optics system, at which point the telescope is handed over. The focused spot size over the field of view is adopted as the feedback parameter for the SPGD algorithm, and wavefront peak-to-valley errors are monitored to compare our mechanical capabilities directly against the alignment goal of diffraction-limited imaging and fine wavefront sensing.
{"title":"Alignment of three mirror anastigmat telescopes using a multilayered stochastic parallel gradient descent algorithm","authors":"Solvay Blomquist, Heejoo Choi, Hyukmo Kang, Kevin Derby, Pierre Nicolas, Ewan S. Douglas, Daewook Kim","doi":"arxiv-2409.04640","DOIUrl":"https://doi.org/arxiv-2409.04640","url":null,"abstract":"When a telescope doesn't reach a reasonable point spread function on the\u0000detector or detectable wavefront quality after initial assembly, a coarse phase\u0000alignment on-sky is crucial. Before utilizing a closed loop adaptive optics\u0000system, the observatory needs a strategy to actively align the telescope\u0000sufficiently for fine wavefront sensing. This paper presents a method of\u0000early-stage alignment using a stochastic parallel-gradient-descent (SPGD)\u0000algorithm which performs random perturbations to the optics of a three mirror\u0000anastigmat telescope design. The SPGD algorithm will drive the telescope until\u0000the wavefront error is below the acceptable range of the fine adaptive optics\u0000system to hand the telescope over. The focused spot size over the field of view\u0000is adopted as a feed parameter to the SPGD algorithm and wavefront\u0000peak-to-valley error values are monitored to directly compare our mechanical\u0000capabilities to our alignment goal of diffraction limited imaging and fine\u0000wavefront sensing.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensemble noise properties of the European Pulsar Timing Array (arXiv:2409.03661)
Boris Goncharov, Shubhit Sardana
The null hypothesis in Pulsar Timing Array (PTA) analyses includes assumptions about the ensemble properties of pulsar time-correlated noise. These properties are encoded in prior probabilities for the amplitude and spectral index of the power-law power spectral density of temporal correlations of the noise. In this work, we introduce a new procedure for numerical marginalisation over the uncertainties in pulsar noise priors. The procedure may be used in searches for nanohertz gravitational waves and other PTA analyses to resolve prior misspecification at negligible computational cost. Furthermore, we infer the distribution of amplitudes and spectral indices of the power spectral density of spin noise and dispersion measure variation noise based on observations of 25 millisecond pulsars by the European Pulsar Timing Array (EPTA). Our results may be used for the simulation of realistic noise in PTAs.
{"title":"Ensemble noise properties of the European Pulsar Timing Array","authors":"Boris Goncharov, Shubhit Sardana","doi":"arxiv-2409.03661","DOIUrl":"https://doi.org/arxiv-2409.03661","url":null,"abstract":"The null hypothesis in Pulsar Timing Array (PTA) analyses includes\u0000assumptions about ensemble properties of pulsar time-correlated noise. These\u0000properties are encoded in prior probabilities for the amplitude and the\u0000spectral index of the power-law power spectral density of temporal correlations\u0000of the noise. In this work, we introduce a new procedure for numerical\u0000marginalisation over the uncertainties in pulsar noise priors. The procedure\u0000may be used in searches for nanohertz gravitational waves and other PTA\u0000analyses to resolve prior misspecification at negligible computational cost.\u0000Furthermore, we infer the distribution of amplitudes and spectral indices of\u0000the power spectral density of spin noise and dispersion measure variation noise\u0000based on the observation of 25 millisecond pulsars by the European Pulsar\u0000Timing Array (EPTA). Our results may be used for the simulation of realistic\u0000noise in PTAs.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strengthening leverage of Astroinformatics in inter-disciplinary Science (arXiv:2409.03425)
Massimo Brescia, Giuseppe Angora
Most domains of science are experiencing a paradigm shift due to the advent of a new generation of instruments and detectors that produce data and data streams at an unprecedented rate. The scientific exploitation of these data, namely Data Driven Discovery, requires interoperability; massive and optimal use of Artificial Intelligence methods at all steps of data acquisition, processing, and analysis; access to large and distributed HPC facilities; implementation of and access to large simulations; and interdisciplinary skills that standard academic curricula usually do not provide. Furthermore, to cope with this data deluge, most communities have leveraged solutions and tools originally developed by large corporations for purposes other than scientific research, accepting compromises to adapt them to their specific needs. Through the presentation of several astrophysical use cases, we show how data-driven solutions can represent the optimal playground for achieving this multi-disciplinary methodological approach.
{"title":"Strengthening leverage of Astroinformatics in inter-disciplinary Science","authors":"Massimo Brescia, Giuseppe Angora","doi":"arxiv-2409.03425","DOIUrl":"https://doi.org/arxiv-2409.03425","url":null,"abstract":"Most domains of science are experiencing a paradigm shift due to the advent\u0000of a new generation of instruments and detectors which produce data and data\u0000streams at an unprecedented rate. The scientific exploitation of these data,\u0000namely Data Driven Discovery, requires interoperability, massive and optimal\u0000use of Artificial Intelligence methods in all steps of the data acquisition,\u0000processing and analysis, the access to large and distributed computing HPC\u0000facilities, the implementation and access to large simulations and\u0000interdisciplinary skills that usually are not provided by standard academic\u0000curricula. Furthermore, to cope with this data deluge, most communities have\u0000leveraged solutions and tools originally developed by large corporations for\u0000purposes other than scientific research and accepted compromises to adapt them\u0000to their specific needs. Through the presentation of several astrophysical use\u0000cases, we show how the Data Driven based solutions could represent the optimal\u0000playground to achieve the multi-disciplinary methodological approach.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Panopticon: a novel deep learning model to detect single transit events with no prior data filtering in PLATO light curves (arXiv:2409.03466)
H. G. Vivien, M. Deleuil, N. Jannsen, J. De Ridder, D. Seynaeve, M. -A. Carpine, Y. Zerah
To prepare for the analysis of the future PLATO light curves, we develop a deep learning model, Panopticon, to detect transits in high-precision photometric light curves. Since PLATO's main objective is the detection of temperate Earth-size planets around solar-type stars, the code is designed to detect individual transit events. The filtering step required by conventional detection methods can affect the transit itself, which could be an issue for long and shallow transits; to preserve transit shape and depth, the code is therefore designed to work on unfiltered light curves. We trained the model on a set of simulated PLATO light curves in which we injected, at pixel level, either planetary, eclipsing binary, or background eclipsing binary signals. We also include a variety of noise sources in our data, such as granulation, stellar spots, and cosmic rays. The approach recovers 90% of our test population, including more than 25% of the Earth analogs, even in the unfiltered light curves. The model recovers transits irrespective of the orbital period and can retrieve them on a single-event basis. These figures are obtained at an accepted false alarm rate of 1%; when the false alarm rate is kept low (<0.01%), the model still recovers more than 85% of the transit signals, and any transit deeper than 180 ppm is essentially guaranteed to be recovered. Because light curves are one-dimensional, model training is fast, on the order of a few hours per model. This speed in training and inference, coupled with the recovery effectiveness and precision of the model, makes it an ideal tool to complement, or be used ahead of, classical approaches.
{"title":"Panopticon: a novel deep learning model to detect single transit events with no prior data filtering in PLATO light curves","authors":"H. G. Vivien, M. Deleuil, N. Jannsen, J. De Ridder, D. Seynaeve, M. -A. Carpine, Y. Zerah","doi":"arxiv-2409.03466","DOIUrl":"https://doi.org/arxiv-2409.03466","url":null,"abstract":"To prepare for the analyses of the future PLATO light curves, we develop a\u0000deep learning model, Panopticon, to detect transits in high precision\u0000photometric light curves. Since PLATO's main objective is the detection of\u0000temperate Earth-size planets around solar-type stars, the code is designed to\u0000detect individual transit events. The filtering step, required by conventional\u0000detection methods, can affect the transit, which could be an issue for long and\u0000shallow transits. To protect transit shape and depth, the code is also designed\u0000to work on unfiltered light curves. We trained the model on a set of simulated\u0000PLATO light curves in which we injected, at pixel level, either planetary,\u0000eclipsing binary, or background eclipsing binary signals. We also include a\u0000variety of noises in our data, such as granulation, stellar spots or cosmic\u0000rays. The approach is able to recover 90% of our test population, including\u0000more than 25% of the Earth-analogs, even in the unfiltered light curves. The\u0000model also recovers the transits irrespective of the orbital period, and is\u0000able to retrieve transits on a unique event basis. These figures are obtained\u0000when accepting a false alarm rate of 1%. When keeping the false alarm rate low\u0000(<0.01%), it is still able to recover more than 85% of the transit signals. Any\u0000transit deeper than 180ppm is essentially guaranteed to be recovered. This\u0000method is able to recover transits on a unique event basis, and does so with a\u0000low false alarm rate. Thanks to light curves being one-dimensional, model\u0000training is fast, on the order of a few hours per model. This speed in training\u0000and inference, coupled to the recovery effectiveness and precision of the model\u0000make it an ideal tool to complement, or be used ahead of, classical approaches.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategy for mitigation of systematics for EoR experiments with the Murchison Widefield Array (arXiv:2409.03232)
Chuneeta D. Nunhokee, Dev Null, Cathryn M. Trott, Christopher H. Jordan, Jack B. Line, Randall Wayth, Nichole Barry
Observations of the 21 cm signal face significant challenges due to bright astrophysical foregrounds, several orders of magnitude brighter than the hydrogen line, along with various systematics. Successful 21 cm experiments require accurate calibration and foreground mitigation. Errors introduced during the calibration process, such as systematics, can disrupt the intrinsic frequency smoothness of the foregrounds, leading to power leakage into the Epoch of Reionisation (EoR) window. It is therefore essential to develop strategies that address these challenges effectively. In this work, we adopt a stringent approach to identify and address suspected systematics, including malfunctioning antennas, frequency channels corrupted by radio frequency interference (RFI), and other dominant effects. We implement a statistical framework that utilises various data products from the data processing pipeline to derive specific criteria and filters. These criteria and filters are applied at intermediate stages to keep systematics from propagating from the early stages of data processing. Our analysis focuses on observations from the Murchison Widefield Array (MWA) Phase I configuration. Of the observations processed by the pipeline, our approach selects 18%, totalling 58 hours, that exhibit fewer systematic effects. The successful selection of observations with reduced systematic dominance enhances our confidence in achieving 21 cm measurements.
{"title":"Strategy for mitigation of systematics for EoR experiments with the Murchison Widefield Array","authors":"Chuneeta D. Nunhokee, Dev Null, Cathryn M. Trott, Christopher H. Jordan, Jack B. Line, Randall Wayth, Nichole Barry","doi":"arxiv-2409.03232","DOIUrl":"https://doi.org/arxiv-2409.03232","url":null,"abstract":"Observations of the 21 cm signal face significant challenges due to bright\u0000astrophysical foregrounds that are several orders of magnitude higher than the\u0000brightness of the hydrogen line, along with various systematics. Successful 21\u0000cm experiments require accurate calibration and foreground mitigation. Errors\u0000introduced during the calibration process such as systematics, can disrupt the\u0000intrinsic frequency smoothness of the foregrounds, leading to power leakage\u0000into the Epoch of Reionisation (EoR) window. Therefore, it is essential to\u0000develop strategies to effectively address these challenges. In this work, we\u0000adopt a stringent approach to identify and address suspected systematics,\u0000including malfunctioning antennas, frequency channels corrupted by radio\u0000frequency interference (RFI), and other dominant effects. We implement a\u0000statistical framework that utilises various data products from the data\u0000processing pipeline to derive specific criteria and filters. These criteria and\u0000filters are applied at intermediate stages to mitigate systematic propagation\u0000from the early stages of data processing. Our analysis focuses on observations\u0000from the Murchison Widefield Array (MWA) Phase I configuration. Out of the\u0000observations processed by the pipeline, our approach selects 18%, totalling 58\u0000hours, that exhibit fewer systematic effects. The successful selection of\u0000observations with reduced systematic dominance enhances our confidence in\u0000achieving 21 cm measurements.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
STAR NRE: Solving supernova selection effects with set-based truncated auto-regressive neural ratio estimation (arXiv:2409.03837)
Konstantin Karchev, Roberto Trotta
Accounting for selection effects in type Ia supernova (SN Ia) cosmology is crucial for unbiased cosmological parameter inference, even more so for the next generation of large, mostly photometric-only surveys. The conventional "bias correction" procedure has a built-in systematic bias towards the fiducial model used to derive it and fails to account for the additional Eddington bias that arises in the presence of significant redshift uncertainty. Bayesian hierarchical models, on the other hand, scale poorly with data set size and require explicit assumptions for the selection function that may be inaccurate or contrived. To address these limitations, we introduce STAR NRE, a simulation-based approach that uses a conditioned deep-set neural network and combines efficient high-dimensional global inference with subsampling-based truncation, in order to scale to very large survey sizes while training on sets of varying cardinality. Applying it to a simplified SN Ia model, consisting of standardised brightnesses and redshifts with Gaussian uncertainties and a selection procedure based on the expected LSST sensitivity, we demonstrate precise and unbiased inference of cosmological parameters and of the redshift evolution of the volumetric SN Ia rate from ~100 000 mock SNae Ia. Our inference procedure can incorporate arbitrarily complex selection criteria, including transient classification, in the forward simulator, and can be applied to complex data such as light curves. We outline these and other steps aimed at integrating STAR NRE into an end-to-end simulation-based pipeline for the analysis of future photometric-only SN Ia data.
{"title":"STAR NRE: Solving supernova selection effects with set-based truncated auto-regressive neural ratio estimation","authors":"Konstantin Karchev, Roberto Trotta","doi":"arxiv-2409.03837","DOIUrl":"https://doi.org/arxiv-2409.03837","url":null,"abstract":"Accounting for selection effects in supernova type Ia (SN Ia) cosmology is\u0000crucial for unbiased cosmological parameter inference -- even more so for the\u0000next generation of large, mostly photometric-only surveys. The conventional\u0000\"bias correction\" procedure has a built-in systematic bias towards the fiducial\u0000model used to derive it and fails to account for the additional Eddington bias\u0000that arises in the presence of significant redshift uncertainty. On the other\u0000hand, Bayesian hierarchical models scale poorly with the data set size and\u0000require explicit assumptions for the selection function that may be inaccurate\u0000or contrived. To address these limitations, we introduce STAR NRE, a\u0000simulation-based approach that makes use of a conditioned deep set neural\u0000network and combines efficient high-dimensional global inference with\u0000subsampling-based truncation in order to scale to very large survey sizes while\u0000training on sets with varying cardinality. Applying it to a simplified SN Ia\u0000model consisting of standardised brightnesses and redshifts with Gaussian\u0000uncertainties and a selection procedure based on the expected LSST sensitivity,\u0000we demonstrate precise and unbiased inference of cosmological parameters and\u0000the redshift evolution of the volumetric SN Ia rate from ~100 000 mock SNae Ia.\u0000Our inference procedure can incorporate arbitrarily complex selection criteria,\u0000including transient classification, in the forward simulator and be applied to\u0000complex data like light curves. We outline these and other steps aimed at\u0000integrating STAR NRE into an end-to-end simulation-based pipeline for the\u0000analysis of future photometric-only SN Ia data.","PeriodicalId":501163,"journal":{"name":"arXiv - PHYS - Instrumentation and Methods for Astrophysics","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}