Pub Date: 2026-01-24 | DOI: 10.1016/j.ascom.2026.101061
Eleonora Villa, Golam Mohiuddin Shaifullah, Andrea Possenti, Carmelita Carbone
We present a detailed study of Bayesian inference workflows for pulsar timing array (PTA) data, focusing on improving efficiency, robustness, and speed through normalizing-flow-based nested sampling. Building on the Enterprise framework, we integrate the i-nessai sampler and benchmark its performance on realistic simulated datasets. We analyze its computational scaling and stability, and show that it achieves accurate posteriors and reliable evidence estimates with substantially reduced runtime (up to three orders of magnitude, depending on the dataset configuration) relative to conventional single-core parallel-tempering MCMC analyses. These results highlight the potential of flow-based nested sampling to accelerate PTA analyses while preserving the quality of the inference.
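As a conceptual illustration of the quantity such samplers estimate (not of the i-nessai algorithm itself), the Bayesian evidence Z = ∫ L(θ) π(θ) dθ can be approximated for a toy one-dimensional Gaussian likelihood by plain Monte Carlo over a uniform prior; the likelihood and prior here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(theta):
    # Toy 1-D standard-normal log-likelihood (illustrative stand-in
    # for a real PTA noise model).
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

# Uniform prior on [-5, 5]: the evidence is E_prior[L] ~= 1/10 analytically,
# since essentially all of the Gaussian mass lies inside [-5, 5].
theta = rng.uniform(-5.0, 5.0, 100_000)
evidence = np.exp(log_likelihood(theta)).mean()
```

Nested sampling replaces this brute-force prior average with an adaptive compression of the prior volume, which is what makes evidence estimation tractable in the high-dimensional PTA setting.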
Title: Improving Bayesian inference in PTA data analysis: Importance nested sampling with Normalizing Flows (Astronomy and Computing, Vol. 55, Article 101061)
Pub Date: 2026-01-24 | DOI: 10.1016/j.ascom.2026.101062
Paolo Matteo Simonetti, Diego Turrini, Romolo Politi, Scigé J. Liu, Sergio Fonte, Danae Polychroni, Stavro Lambrov Ivanovski
Large n-body simulations with fully interacting objects represent the next frontier in computational planet formation studies. In this paper, we present Mercury-Opal, the GPU-accelerated version of the n-body planet formation code Mercury-Arχes. The port to GPU computing was performed with OpenACC to ensure cross-platform support and minimize code-restructuring effort while retaining most of the performance increase expected from GPU computing. We tested Mercury-Opal against its parent code Mercury-Arχes under conditions that put GPU computing at a disadvantage, and show that GPU-based execution nevertheless outperforms CPU-serial execution even for limited computational loads.
Title: Mercury-Opal: The GPU-accelerated version of the n-body code for planet formation Mercury-Arχes (Astronomy and Computing, Vol. 55, Article 101062)
Pub Date: 2026-01-23 | DOI: 10.1016/j.ascom.2026.101071
F. Incardona, A. Costa, F. Farsian, F. Franchina, G. Leto, E. Mastriani, K. Munari, G. Pareschi, S. Scuderi, S. Spinello, G. Tosti, ASTRI Project
This study presents a Normal Behavior Model (NBM) developed to forecast monitoring time-series data from the ASTRI-Horn Cherenkov telescope under normal operating conditions. The analysis focused on 15 physical variables acquired by the Telescope Control Unit between September 2022 and July 2024, representing sensor measurements from the Azimuth and Elevation motors. After data cleaning, resampling, feature selection, and correlation analysis, the dataset was segmented into fixed-length intervals, in which the first I samples formed the input sequence provided to the model, while the forecast length, T, indicated the number of future time steps to be predicted. A sliding-window technique was then applied to increase the number of intervals. A Multi-Layer Perceptron (MLP) was trained to perform multivariate forecasting across all features simultaneously. Model performance was evaluated using the Mean Squared Error (MSE) and the Normalized Median Absolute Deviation (NMAD), and was also benchmarked against a Long Short-Term Memory (LSTM) network. The MLP model demonstrated consistent results across different features and I–T configurations, and matched the performance of the LSTM while converging faster. It achieved an MSE of 0.019 ± 0.003 and an NMAD of 0.032 ± 0.009 on the test set under its best configuration (4 hidden layers, 720 units per layer, and I–T lengths of 300 samples each, corresponding to 5 h at 1-minute resolution). Extending the forecast horizon up to 6.5 h, the maximum allowed by this configuration, did not degrade performance, confirming the model’s effectiveness in providing reliable hour-scale predictions. The proposed NBM provides a powerful tool for enabling early anomaly detection in online ASTRI-Horn monitoring time series, offering a basis for the future development of a prognostics and health management system that supports predictive maintenance.
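The fixed-length segmentation described above (I input samples followed by T forecast steps, advanced with a sliding window) can be sketched as follows; the function name and stride parameter are hypothetical:

```python
import numpy as np

def make_windows(series, input_len, forecast_len, stride=1):
    """Segment a multivariate series of shape (n_samples, n_features) into
    (input, target) window pairs: the first `input_len` samples feed the
    model, the following `forecast_len` samples are the prediction target."""
    X, y = [], []
    total = input_len + forecast_len
    for start in range(0, len(series) - total + 1, stride):
        X.append(series[start:start + input_len])
        y.append(series[start + input_len:start + total])
    return np.asarray(X), np.asarray(y)
```

For 15 variables at 1-minute resolution, `input_len = forecast_len = 300` reproduces the 5 h input and forecast lengths of the best configuration quoted above; a stride of 1 maximizes the number of training intervals.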
Title: Multivariate time-series forecasting of ASTRI-Horn monitoring data: A Normal Behavior Model (Astronomy and Computing, Vol. 55, Article 101071)
Pub Date: 2026-01-15 | DOI: 10.1016/j.ascom.2026.101059
Qingchuan Zhao
We present a compact and fully reproducible workflow for star–galaxy separation in the Dark Energy Survey Data Release 2 (DES DR2) over 18 ≤ MAG_AUTO_I < 24. Using only two widely available catalog attributes, MAG_AUTO_I (a Kron-like magnitude) and SPREAD_MODEL_I (a point spread function (PSF) versus extended-source morphology discriminant), we train slice-wise logistic-regression models against the survey’s internal morphology summary EXTENDED_CLASS_COADD. Performance is reported as a function of magnitude, and slice-wise class fractions are quantified, showing smooth variation from bright to faint regimes without severe class imbalance in most slices (imbalance increases toward the faint edge). Across most slices the baseline reproduces the internal label with high discriminative metrics (precision, recall, F1, and AUC), while Brier scores and reliability curves highlight calibration challenges toward the faint end. We also examine the impact of seeing and of the cross-validation strategy (random vs. tile-wise), and test a minimal feature-augmentation experiment. All SQL, derived catalogs, metrics tables, and figure notebooks are released as a citable dataset on Zenodo (DOI: 10.5281/zenodo.17688656) to enable reproduction. The main contribution is a transparent, pedagogical baseline that can serve as a reproducible sanity check and a starting point for more sophisticated classifiers in DES-like surveys.
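A minimal sketch of the slice-wise scheme, assuming synthetic stand-ins for the two catalog attributes and 1-mag bin edges (both invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two catalog attributes: galaxies sit at
# slightly larger SPREAD_MODEL_I than stars (numbers invented).
n = 2000
is_galaxy = rng.integers(0, 2, n)
mag_auto_i = rng.uniform(18.0, 24.0, n)
spread_model_i = np.where(is_galaxy == 1,
                          rng.normal(0.010, 0.005, n),
                          rng.normal(0.000, 0.002, n))
X = np.column_stack([mag_auto_i, spread_model_i])

# One classifier per 1-mag slice, mirroring the slice-wise scheme.
models = {}
for lo in range(18, 24):
    in_slice = (mag_auto_i >= lo) & (mag_auto_i < lo + 1)
    models[lo] = make_pipeline(StandardScaler(), LogisticRegression())
    models[lo].fit(X[in_slice], is_galaxy[in_slice])
```

Fitting per slice lets the decision boundary drift with magnitude, which is the point of the slice-wise design; the scaler is needed because the two features live on very different numeric scales.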
Title: Reproducible star–galaxy separation in DES DR2 18≤i<24: A minimal machine-learning baseline with slice-wise metrics, calibration diagnostics, and visualization mosaics (Astronomy and Computing, Vol. 55, Article 101059)
Pub Date: 2026-01-15 | DOI: 10.1016/j.ascom.2026.101060
G. Lacopo, M.D. Lepinzan, D. Goz, G. Taffoni, L. Tornatore, P. Monaco, P.J. Elahi, U. Varetto, M. Cytowski, L. Riha
The increasing complexity and scale of cosmological N-body simulations, driven by astronomical surveys like Euclid, call for a paradigm shift towards more sustainable and energy-efficient high-performance computing (HPC). The rising energy consumption of supercomputing facilities poses a significant environmental and financial challenge.
In this work, we build upon a recently developed GPU implementation of PINOCCHIO, a widely used tool for the fast generation of dark matter (DM) halo catalogs, to investigate energy consumption. Using a different resource configuration, we confirm the time-to-solution behavior observed in a companion study, and we use these runs to compare time-to-solution with energy-to-solution.
By profiling the code on various HPC platforms with a newly developed implementation of the Power Measurement Toolkit (PMT), we demonstrate an 8× reduction in energy-to-solution and an 8× speed-up in time-to-solution compared to the CPU-only version. Taken together, these gains translate into an overall efficiency improvement of up to 64×. Our results show that the GPU-accelerated PINOCCHIO not only achieves substantial speed-up, making the generation of large-scale mock catalogs more tractable, but also significantly reduces the energy footprint of the simulations. This work represents a step towards “green-aware” scientific computing in cosmology, showing that performance and sustainability can be achieved simultaneously.
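The two metrics being compared can be made concrete with a small sketch: energy-to-solution as the time integral of sampled power draw, and the combined efficiency gain as the product of the runtime and energy improvements (8 × 8 = 64 in the headline numbers above). The function names are hypothetical, not PMT API calls:

```python
def energy_to_solution(power_w, t_s):
    """Energy (joules) as the trapezoidal time integral of sampled power
    draw (watts), the quantity a PMT-style power monitor reports per run."""
    e = 0.0
    for i in range(len(t_s) - 1):
        e += 0.5 * (power_w[i] + power_w[i + 1]) * (t_s[i + 1] - t_s[i])
    return e

def efficiency_gain(speedup, energy_reduction):
    # Combined improvement when runtime and energy improve together:
    # running 8x faster at 8x less energy gives a 64x overall gain.
    return speedup * energy_reduction
```

For example, a steady 100 W draw over a 10 s run integrates to 1000 J; halving both the runtime and the mean power would quarter the energy-delay product.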
Title: Accelerating cosmological simulations on GPUs: A step towards sustainability and green-awareness (Astronomy and Computing, Vol. 55, Article 101060)
Pub Date: 2026-01-06 | DOI: 10.1016/j.ascom.2025.101058
Xiaotong Li, Karel Adámek, Wesley Armour
Ultra-long-period (ULP) pulsars, a newly identified class of celestial transients, offer unique insights into astrophysics, though very few have been detected to date. In radio astronomy, most time-domain detection methods cannot find these pulsars, and current image-based detection approaches still face challenges, including low sensitivity, high false-positive rates, and low computational efficiency. In this article, we develop the Fast Imaging Trigger (FITrig), a GPU-accelerated, statistics-based method for ULP pulsar detection and localisation. FITrig includes two complementary approaches: an image-domain and an image-frequency-domain strategy. FITrig increases sensitivity to faint pulsars, suppresses false positives (from noise, processing artefacts, or steady sources), and improves search efficiency in large-scale wide-field images. Compared to the state-of-the-art source finder SOFIA 2, FITrig increases detection speed by 4.3 times for large images (50K × 50K pixels) and reduces false positives by up to 858.8 times (at 6σ significance) for the image-domain branch, while the image-frequency-domain branch suppresses false positives even further. FITrig maintains the capability to detect pulsars that are 20 times fainter than surrounding steady features, even under critical Nyquist sampling conditions. In this article, the performance of FITrig is demonstrated using both real-world data (MeerKAT observations of PSR J0901-4046) and simulated datasets based on MeerKAT and SKA Array Assembly (AA) 2 telescope configurations. With its real-time processing capabilities and scalability, FITrig is a promising tool for next-generation telescopes such as the SKA, with the potential to uncover hidden ULP pulsars.
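A simplified stand-in for an image-domain, statistics-based detection step (not the actual FITrig statistic): flag pixels whose peak temporal deviation exceeds k robust sigmas, with the noise level estimated per pixel from the median absolute deviation:

```python
import numpy as np

def sigma_detect(image_stack, k=6.0):
    """Flag pixels whose peak temporal deviation from the per-pixel median
    exceeds k robust sigmas (MAD-based). A simplified illustrative statistic,
    not the actual FITrig detection function."""
    med = np.median(image_stack, axis=0)
    mad = np.median(np.abs(image_stack - med), axis=0)
    sigma = 1.4826 * mad + 1e-12       # MAD -> Gaussian-equivalent sigma
    peak_dev = np.max(np.abs(image_stack - med), axis=0)
    return peak_dev > k * sigma

rng = np.random.default_rng(0)
stack = rng.normal(0.0, 1.0, (40, 32, 32))   # 40 noise-only frames
stack[7, 5, 5] += 50.0                       # one bright transient pixel
mask = sigma_detect(stack)
```

Robust (median/MAD) statistics are what keep steady sources and occasional artefacts from inflating the noise estimate, which is one of the false-positive mechanisms the abstract mentions.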
Title: FITrig: A high-performance detection technique for efficient Ultra-Long-Period Pulsars (Astronomy and Computing, Vol. 55, Article 101058)
Pub Date: 2026-01-05 | DOI: 10.1016/j.ascom.2025.101055
G. Guilluy, P. Giacobbe, F. Amadori, G. Quaglia, A.S. Bonomo
Spectral retrieval is a fundamental tool for investigating the chemical composition and physical properties of exoplanetary atmospheres. These retrievals rely on reconstructing the path of photons from the host star through the exoplanet’s atmosphere, a task typically accomplished by solving the radiative transfer (RT) equations. This calculation constitutes one of the main computational bottlenecks of retrievals. Another significant bottleneck arises from the need to generate a very large number of models and compare them with observations in order to explore the parameter space within a Bayesian framework. In this work, we address both of these bottlenecks within our framework GUIBRUSH® (Graphic User Interface for Bayesian Retrieval Using High Resolution Spectroscopy). First, we optimised the efficiency of our Bayesian analysis tool, parallelising both the forward-model computation and its comparison with observational data. This strategy yielded a performance improvement of approximately 10× relative to the original implementation. Secondly, we accelerated the RT calculation. We first benchmarked the performance of two widely adopted Python-based packages, petitRADTRANS and PyratBay, on the hot Jupiter WASP-127b. We found that in the spectral band we investigated (0.95–2.45 μm, in the near-infrared), PyratBay ran approximately twice as fast as petitRADTRANS on CPU, motivating its adoption as the baseline for further optimisation. We then implemented a GPU-accelerated version of PyratBay by parallelising the computation of the optical depth and transmission spectrum across the wavelength domain using PyCUDA, which provides a seamless interface between Python and NVIDIA’s CUDA framework. When computing 100 models, the GPU implementation of PyratBay achieved a median speed-up of approximately 3.4× per model compared to the CPU version.
To extend this gain to full retrievals, we integrated the GPU version with Python’s multiprocessing-pool, enabling large model grids to be evaluated in parallel. For our test case on WASP-127 b, the total runtime to compute 99123 models (corresponding to the number of iterations required for the retrieval to converge) was reduced from 173082.4 s to 10046.43 s.
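The grid-evaluation pattern can be sketched as below. For portability the sketch uses the API-compatible ThreadPool, whereas the paper uses Python's process-based multiprocessing pool; the forward model here is a toy stand-in with hypothetical parameter names:

```python
from multiprocessing.pool import ThreadPool
import math

def forward_model(params):
    """Toy stand-in for one forward-model evaluation (e.g. a transmission
    spectrum for a temperature/gravity pair); names are hypothetical."""
    temp, log_g = params
    return temp * math.exp(-log_g)

# Evaluate a small parameter grid in parallel; map() preserves grid order,
# so results can be matched back to their parameter tuples.
grid = [(t, g) for t in (1000, 1100, 1200, 1300, 1400) for g in (2.0, 3.0)]
with ThreadPool(4) as pool:
    spectra = pool.map(forward_model, grid)
```

Because each forward model is independent, the evaluation is embarrassingly parallel; with process workers (or a GPU per model batch, as in the paper) the wall time scales down roughly with the number of workers until the per-model cost dominates.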
We are now working on integrating the GPU-accelerated PyratBay version directly into GUIBRUSH®, enabling fully GPU-powered atmospheric retrievals.
Title: Speeding Up the GUIBRUSH® retrieval code for modelling exoplanetary atmospheres (Astronomy and Computing, Vol. 55, Article 101055)
Pub Date: 2025-12-30 | DOI: 10.1016/j.ascom.2025.101056
L. Brolli, C. Fruncillo, S. Zimotti, S. Tortora, L. Maina, A. Petrone, M. Gai, D. Busonero
NeuroStarMap aims to provide Neural Network (NN) tools for accessing the Gaia catalogue source classes that support the materialization of the cosmic distance ladder, namely Cepheids, RR Lyrae, and eclipsing binaries. The tools are trained, tested, and validated on Gaia DR3 objects, and are expected to remain compatible (via updates and upgrades) with the forthcoming DR4 and DR5 catalogue releases. The practical goal is the implementation of tools, fed by suitable photometric and variability data, able to provide an adequate estimate of the target distance through its proxy, parallax, consistent with the direct Gaia determination. We discuss the characteristics of the available dataset, the filtering and pre-processing applied to ensure proper neural encoding, the NN model selection, and the current status of dataset fitting. The proposed solution, labeled ParallaxPredictorMXL, is a heterogeneous combination of simpler regression models that provides the best match to the complex information structure of the dataset.
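A heterogeneous combination of simpler regression models, in the spirit of ParallaxPredictorMXL, can be sketched with scikit-learn stacking; the actual architecture is not specified in the abstract, and the features, coefficients, and noise level below are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Synthetic photometric/variability features -> parallax-like target
# (coefficients and noise level invented for illustration).
X = rng.normal(size=(500, 4))
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(0.0, 0.05, 500)

# Heterogeneous base learners combined by a simple linear meta-model.
ensemble = StackingRegressor(
    estimators=[
        ("ridge", Ridge()),
        ("knn", KNeighborsRegressor()),
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ],
    final_estimator=Ridge(),
)
ensemble.fit(X, y)
```

The design idea is that base models with different inductive biases (linear, local, tree-based) cover different regions of a complex dataset, and the meta-model learns how to weight them.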
Title: NeuroStarMap: Neural Network encoding of Gaia’s distance ladder (Astronomy and Computing, Vol. 55, Article 101056)
Pub Date : 2025-12-29DOI: 10.1016/j.ascom.2025.101048
Md. Fairuz Siddiquee , Md Mehedi Hasan , Shifat E. Arman , Md. Shahedul Islam , AKM Azad
Celestial classification, traditionally based on spectral analysis, helps in understanding the characteristics and distribution of solar radiation, aiding the design of solar sail technology and potentially reducing energy costs in space missions. This research investigates the spectral and morphological classification of celestial objects by integrating feature engineering with astrophysical knowledge and principles, utilizing Machine Learning (ML) methodologies. These insights enabled a careful refinement of the feature set and the systematic elimination of irrelevant and unstructured data, thereby improving both the model's accuracy and its computational efficiency. Analysis of the Sloan Digital Sky Survey (SDSS) dataset highlights redshift and near-infrared measurements (the i and z filters) as crucial spectral parameters for classifying stars, galaxies, and quasars. Feature selection streamlined the dataset from 17 initial features to the most pertinent filters (u, g, r, i, z) and redshift, enhancing computational efficiency and model accuracy. Using these features, the Random Forest classifier attained the best accuracy (98%) across all classes, surpassing both k-nearest neighbors (k-NN) and support vector machines (SVM). For morphological classification, the YOLOv5, YOLOv7, and YOLOv8 models were trained on a tailored dataset to classify galaxies into five morphological categories: Elliptical, Spiral, Irregular, Merging, and Peculiar. Quantitative evaluation indicated that YOLOv8 achieved the highest performance, with 95.5% precision across all galaxy classifications and an overall recall of 73.7%, underscoring its effectiveness in identifying diverse galaxy morphologies.
This comprehensive investigation enhances model interpretability and accuracy, demonstrates the efficacy of astrophysically motivated features, and establishes a robust framework for real-time analysis of large datasets in astrophysical research, providing a benchmark for practical applications through advanced data-driven approaches.
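The feature-selection step described above (paring 17 columns down to the informative filters plus redshift) can be sketched with a univariate ranking criterion. The sketch below uses a one-way ANOVA F statistic on a synthetic 17-feature table in which only the first six columns carry class information; the data, labels, and scoring rule are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an SDSS-like table: 17 columns, of which only the first
# six (think u, g, r, i, z and redshift) actually carry class information.
n, n_feat, n_informative = 600, 17, 6
X = rng.normal(size=(n, n_feat))
labels = (X[:, :n_informative].sum(axis=1) > 0).astype(int)  # 2-class toy label

def anova_f_score(x, y):
    """One-way ANOVA F statistic of a single feature against class labels."""
    groups = [x[y == c] for c in np.unique(y)]
    grand = x.mean()
    k = len(groups)
    between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(x) - k)
    return between / within

# Rank every column by its F score and keep the top six.
scores = np.array([anova_f_score(X[:, j], labels) for j in range(n_feat)])
selected = np.argsort(scores)[::-1][:n_informative]
print(sorted(selected.tolist()))
```

On this toy table the six informative columns score far above the noise columns and are recovered exactly; the selected subset would then feed a downstream classifier such as a Random Forest.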
{"title":"Exploring celestial classification: Astrophysical features-guided machine learning for spectral and morphological analysis","authors":"Md. Fairuz Siddiquee , Md Mehedi Hasan , Shifat E. Arman , Md. Shahedul Islam , AKM Azad","doi":"10.1016/j.ascom.2025.101048","DOIUrl":"10.1016/j.ascom.2025.101048","url":null,"abstract":"<div><div>Celestial classification, traditionally based on spectral analysis, helps understand the characteristics and distribution of solar radiation, aiding in the design of solar sail technology and potentially reducing energy costs in space missions. This research investigates the spectral and morphological classification of celestial entities by integrating feature engineering with astrophysical knowledge and principles, utilizing Machine Learning (ML) methodologies. These insights enabled the careful enhancement of the feature set, resulting in the systematic elimination of irrelevant and unstructured data, thereby improving both the model’s accuracy and its computing efficiency. The examination of the Sloan Digital Sky Survey (SDSS) dataset highlights redshift and near-infrared measurements (i and z filters) as crucial spectral parameters for classifying stars, galaxies, and quasars. Feature selection streamlined the dataset from 17 initial features to the most pertinent filters (u, g, r, i, z) and redshift, thereby enhancing computational efficiency and model correctness. The Random Forest classifier attained the best accuracy (98%) across all classes by utilizing these features, surpassing both k-nearest neighbors (k-NN) and support vector machines (SVM). For morphological classification, the YOLOv5, YOLOv7, and YOLOv8 models were trained on a tailored dataset to classify galaxies into five morphological categories: Elliptical, Spiral, Irregular, Merging, and Peculiar. 
Quantitative research indicated that YOLOv8 achieved the highest performance, with 95.5% precision across all galaxy classifications and an overall recall of 73.7%, underscoring its effectiveness in identifying various galaxy morphologies. This comprehensive investigation enhances model interpretability and accuracy, underscores the efficacy of astrophysically motivated features, and establishes a robust framework for real-time large data analysis in astrophysical research, providing a benchmark for industrial applications through advanced data-driven approaches.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"55 ","pages":"Article 101048"},"PeriodicalIF":1.8,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-29DOI: 10.1016/j.ascom.2025.101057
Wan Aiman Hakimie Wan Abdul Hadi , Muhamad Syazwan Faid , Mohd Saiful Anwar Mohd Nawawi , Raihana Abdul Wahab , Nazhatulshima Ahmad , Mohd Zambri Zainuddin , Ahmad Adib Rofiuddin , Muhammad Ridzuan Hashim
The detection of the lunar crescent is a fundamental challenge in observational astronomy, particularly in the context of time-sensitive astronomical phenomena. This study presents a computational approach for automated lunar crescent extraction from astronomical images using Python-based vision algorithms. While previous efforts in this domain have employed image processing techniques, they were often constrained by dataset bias and limited empirical testing on real-world imagery. In this work, a total of 67 observational lunar images from the Optical Astronomy Research Laboratory (OpARL), spanning 2000 to 2025, were analysed using a sequence of digital image processing techniques including grayscale masking, Gaussian filtering, edge detection, contour enhancement, and object recognition. The approach achieved a detection success rate of 70.15% in predicting the appearance of a lunar crescent in an image. The results also reveal correlations between detection outcomes and lunar altitude and elongation. The findings demonstrate the effectiveness of integrating classical image-processing pipelines with astronomical datasets for reliable crescent identification. This improves the identification of a lunar crescent in imagery during live observations or in post-processing.
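The processing sequence named above (smoothing, edge detection, thresholding, object localisation) can be sketched end-to-end on a synthetic crescent image. The sketch below uses plain NumPy so it is self-contained; a real pipeline like the one described would typically rely on OpenCV, and the synthetic image, kernel, and threshold are illustrative assumptions rather than the authors' actual parameters.

```python
import numpy as np

# Synthetic "lunar image": a thin bright crescent on a dark sky, built by
# subtracting two horizontally offset discs, plus mild noise.
size = 128
yy, xx = np.mgrid[:size, :size]
disc = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
shifted = (xx - 72) ** 2 + (yy - 64) ** 2 < 40 ** 2
img = np.where(disc & ~shifted, 200.0, 10.0)
img += np.random.default_rng(2).normal(0.0, 2.0, (size, size))

# 1. Gaussian-like smoothing via a separable 5-tap binomial kernel.
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k /= k.sum()
smooth = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)
smooth = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, smooth)

# 2. Gradient-magnitude edge map from simple finite differences.
gy, gx = np.gradient(smooth)
edges = np.hypot(gx, gy)

# 3. Threshold the smoothed image and locate the bright object's bounding box.
mask = smooth > smooth.mean() + 2.0 * smooth.std()
rows, cols = np.nonzero(mask)
bbox = (rows.min(), rows.max(), cols.min(), cols.max())
print("crescent bounding box:", bbox)
```

The recovered bounding box hugs the left-hand crescent of the synthetic moon; on real frames the same steps would be followed by contour enhancement and object recognition to confirm the detection.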
{"title":"Automated lunar crescent extraction from astronomical imaging using python-based vision algorithms","authors":"Wan Aiman Hakimie Wan Abdul Hadi , Muhamad Syazwan Faid , Mohd Saiful Anwar Mohd Nawawi , Raihana Abdul Wahab , Nazhatulshima Ahmad , Mohd Zambri Zainuddin , Ahmad Adib Rofiuddin , Muhammad Ridzuan Hashim","doi":"10.1016/j.ascom.2025.101057","DOIUrl":"10.1016/j.ascom.2025.101057","url":null,"abstract":"<div><div>The detection of the lunar crescent is a fundamental challenge in observational astronomy, particularly in the context of time-sensitive astronomical phenomena. This study presents a computational approach for automated lunar crescent extraction from astronomical images using Python-based vision algorithms. While previous efforts in this domain have employed image processing techniques, they were often constrained by dataset bias and limited empirical testing on real-world imagery. In this work, a total of 67 observational lunar images from the Optical Astronomy Research Laboratory (OpARL), spanning 2000 to 2025, were analysed using a sequence of digital image processing techniques including grayscale masking, Gaussian filtering, edge detection, contour enhancement, and object recognition. The approach achieved a detection success rate of 70.15% in predicting a lunar crescent appearance in an imaging. The result also finds correlations between detection outcomes and lunar altitude and elongation. The findings demonstrate the effectiveness of integrating classical image processing pipelines with astronomical datasets for reliable crescent identification. 
This improves the process of identification of an appearance of a lunar crescent image during live observations or post processing.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"55 ","pages":"Article 101057"},"PeriodicalIF":1.8,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145925360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}