Pub Date: 2025-11-19 | DOI: 10.1016/j.ascom.2025.101027
Panagiotis N. Sakellariou , Spiros V. Georgakopoulos , Sotiris Tasoulis , Vassilis P. Plagianakos
The detection of gravitational waves has revolutionized our ability to explore fundamental aspects of the Universe. Traditionally, modeled gravitational-wave signals have been identified using template-based matched filtering, followed by coincidence analysis across multiple detectors in the signal-to-noise ratio time series. Recent advances in Machine Learning and Deep Learning have sparked growing interest in their application to both signal detection and parameter estimation. In this study, a hybrid Deep Learning strategy is proposed that leverages the effectiveness of Transformer encoders alongside well-established Convolutional Neural Network architectures in an attempt to estimate the intrinsic and extrinsic parameters of non-precessing binary black hole systems. The primary focus of this work is point estimation, producing single best-fit values for each parameter rather than full posterior distributions. This method is evaluated on both simulated signals embedded in Gaussian noise and real gravitational-wave events, and it demonstrates strong predictive performance and robustness across key astrophysical parameters.
Title: "Binary black hole parameter estimation with hybrid CNN-Transformer Neural Networks" (Astronomy and Computing, vol. 54, Article 101027)
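As a toy illustration of the hybrid idea only (not the authors' architecture, and with random, untrained weights), the sketch below runs a whitened strain segment through a strided 1-D convolutional front end that captures local signal morphology, mixes the resulting feature map with a single self-attention layer for long-range context, and pools to single best-fit point estimates. All shapes and dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=4):
    """Strided valid 1-D convolution: x (T,), kernels (K, w) -> (T', K)."""
    w = kernels.shape[1]
    starts = range(0, len(x) - w + 1, stride)
    return np.array([[k @ x[s:s + w] for k in kernels] for s in starts])

def self_attention(H):
    """Single-head scaled dot-product self-attention (no learned Q/K/V here)."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ H

# toy whitened strain segment (1 s at 2048 Hz) and random, untrained weights
x = rng.standard_normal(2048)
kernels = 0.1 * rng.standard_normal((8, 16))  # CNN front end: 8 local filters
W_out = 0.1 * rng.standard_normal((8, 2))     # head for two point estimates (e.g. m1, m2)

H = conv1d(x, kernels)           # (509, 8) feature map
Z = self_attention(H)            # Transformer-style long-range mixing
params = Z.mean(axis=0) @ W_out  # pooled single best-fit value per parameter
```

A trained version would learn the kernels and attention projections by regressing on simulated signals; here the pipeline only demonstrates how the two architectural components compose.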
Pub Date: 2025-11-18 | DOI: 10.1016/j.ascom.2025.101024
Seungwan Han , Wonseok Kang , Jae-Hun Jung
Stellar classification based on the Morgan–Keenan (MK) system has long been a fundamental task in astronomy. Numerous studies have attempted to automate this process using machine learning (ML) applied to spectra from digital archives. However, these archives require wavelength calibration — a complex and time-consuming procedure — and spectral type determination relies on expert knowledge. As a result, the available dataset remains limited, containing no more than 1,500 reliably classified spectra for use in independent classification studies. To address this limitation, we constructed a large-scale dataset using stars previously classified in Nancy Houk’s catalog, which provides the coordinates and spectral types of stars observed on objective prism plates. Based on this information, we developed an algorithm to extract stellar spectra from the plates and associate them with the corresponding spectral types listed in the catalog. From a total of 1,064 plates, we obtained 91,050 stellar images and successfully extracted 70,360 usable spectra. For classification, we employed a convolutional neural network (CNN) and introduced a Gaussian encoding method, which better captures the continuous nature of spectral subclasses than conventional one-hot encoding. Our CNN model achieved an accuracy of 41.5% in classifying 49 spectral subclasses, slightly outperforming previous state-of-the-art models that reported 41.2%.
Title: "Stellar spectral classification using convolutional neural networks on objective prism plates" (Astronomy and Computing, vol. 54, Article 101024)
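The Gaussian encoding idea can be sketched as follows: instead of a one-hot target, the label is a normalized Gaussian centered on the true subclass index, so neighboring subclasses on the continuous MK sequence receive non-zero probability. The width below is an arbitrary assumption; the abstract does not state the value used.

```python
import numpy as np

def gaussian_encode(target_idx, n_classes, sigma=1.5):
    """Soft label: normalized Gaussian centred on the true subclass index.

    Unlike one-hot encoding, neighbouring subclasses (e.g. A2 vs A3)
    receive non-zero probability, reflecting the continuous MK sequence.
    """
    idx = np.arange(n_classes)
    enc = np.exp(-0.5 * ((idx - target_idx) / sigma) ** 2)
    return enc / enc.sum()

one_hot = np.eye(49)[10]               # conventional hard target for subclass 10
soft = gaussian_encode(10, 49)         # soft target: mass spread to neighbours
```

Training against such soft targets (e.g. with a cross-entropy loss) penalizes a one-subclass miss far less than a distant one, which is the stated motivation for the method.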
Pub Date: 2025-11-12 | DOI: 10.1016/j.ascom.2025.101022
Evgeny A. Smirnov
This paper presents a major enhancement to the resonances Python package, which now implements full support for identifying and analyzing secular resonances. Building upon the established mean-motion resonance framework, the implementation introduces: (1) a flexible mathematical expression parser supporting arbitrary combinations of the fundamental frequencies (g, s, gᵢ, sᵢ), enabling analysis of both linear resonances (ν₅, ν₆, ν₁₆) and more than 70 nonlinear resonances from the literature; (2) specialized libration detection algorithms optimized for secular timescales, with automated parameter adaptation for extended integration times; (3) integration with existing mean-motion resonance workflows through consistent interfaces, allowing unified dynamical studies. The package has been tested through automated unit and integration tests and manual validation against examples from the literature, with all test cases (including the ν₆, ν₁₆, z₁, z₂, 2ν₆−ν₅, and 3ν₆−2ν₅ resonances) passing successfully, with minor exceptions. The new version maintains the simplicity of the original interface, requiring only 3–4 lines of code for standard analyses, while providing researchers with powerful tools for systematic dynamical analysis and asteroid family studies. The package is available on GitHub under the MIT license.
Title: "Implementation of secular resonance support in the open-source python package “resonances”" (Astronomy and Computing, vol. 54, Article 101022)
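As a sketch of what libration detection on secular timescales involves (this is not the resonances package API), the toy code below builds a ν₆-type resonant argument σ = ϖ − ϖ₆ from hypothetical frequencies and distinguishes libration from circulation with a simple circular-statistics test: a circulating angle spreads over the full circle, while a librating one clusters around a center.

```python
import numpy as np

def resonant_angle(t, freqs, coeffs, phases):
    """Secular resonant argument sigma(t) = sum_i c_i * (f_i * t + phi_i), mod 2*pi."""
    return (np.outer(t, freqs) + phases) @ coeffs % (2 * np.pi)

def librates(sigma, threshold=0.8):
    """Crude libration test via the mean resultant length of the angle samples.

    Near 0 for a circulating (quasi-uniform) angle, near 1 for a librating one.
    """
    R = np.abs(np.mean(np.exp(1j * sigma)))
    return R > threshold

t = np.linspace(0.0, 1.0e6, 4000)       # 1 Myr sampled in years
# hypothetical secular frequencies in rad/yr (toy values, not real planetary ones)
freqs = np.array([3.0e-5, 2.0e-5])      # (g, g6)
coeffs = np.array([1.0, -1.0])          # sigma = g - g6: nu6-type argument
sigma_circ = resonant_angle(t, freqs, coeffs, np.zeros(2))  # drifts: circulation
# a librating case: argument oscillates around pi instead of drifting
sigma_lib = (np.pi + 0.5 * np.sin(2 * np.pi * t / 1.0e5)) % (2 * np.pi)
```

Real detection on secular timescales needs more care (filtering of fast terms, adaptive windows over long integrations), which is what the package's specialized algorithms address.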
Pub Date: 2025-11-12 | DOI: 10.1016/j.ascom.2025.101026
Reginald Christian Bernardo , Erika Antonette Enriquez , Renier Mendoza , Reinabelle Reyes , Arrianne Crystal Velasco
Precise and accurate estimation of cosmological parameters is crucial for understanding the Universe’s dynamics and addressing cosmological tensions. In this methods paper, we explore bio-inspired metaheuristic algorithms, including the Improved Multi-Operator Differential Evolution scheme and the Philippine Eagle Optimization Algorithm (PEOA), alongside the well-known genetic algorithm, for cosmological parameter estimation. Using mock data generated from a known fiducial cosmology, we test the ability of each optimization method to recover the input cosmological parameters, with confidence regions generated by bootstrapping on top of optimization. We compare the results with Markov chain Monte Carlo (MCMC) in terms of accuracy and precision, and show that PEOA performs comparably well under the specific circumstances provided. Understandably, Bayesian inference and optimization serve distinct purposes, but comparing them highlights the potential of nature-inspired algorithms in cosmological analysis, offering alternative pathways to explore parameter spaces and validate standard results.
Title: "Nature-inspired optimization, the Philippine Eagle, and cosmological parameter estimation" (Astronomy and Computing, vol. 54, Article 101026)
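A minimal sketch of the approach, under stated assumptions: SciPy's differential_evolution stands in for the paper's IMODE/PEOA optimizers (which are not available in SciPy), fitting a flat ΛCDM expansion rate H(z) to mock data drawn from a known fiducial cosmology, with a residual bootstrap on top of the optimization to form confidence intervals. All data sizes, bounds, and noise levels are illustrative choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

def hubble(z, H0, Om):
    """Flat LCDM expansion rate H(z) in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

# mock H(z) data from a known fiducial cosmology
z = np.linspace(0.1, 2.0, 25)
truth = (70.0, 0.3)
y = hubble(z, *truth) + rng.normal(0.0, 2.0, z.size)

def chi2(p, y_obs):
    return np.sum((y_obs - hubble(z, *p)) ** 2)

bounds = [(50.0, 90.0), (0.05, 0.6)]  # (H0, Om) search box
best = differential_evolution(chi2, bounds, args=(y,), seed=1, tol=1e-8).x

# bootstrap on top of optimization: refit against resampled residuals
resid = y - hubble(z, *best)
boot = np.array([
    differential_evolution(
        chi2, bounds,
        args=(hubble(z, *best) + rng.choice(resid, resid.size),),
        seed=k, maxiter=60, tol=1e-6,
    ).x
    for k in range(10)  # small for illustration; use >= 1000 in practice
])
lo, hi = np.percentile(boot[:, 0], [16, 84])  # ~68% interval on H0
```

Unlike MCMC, each refit yields only a point estimate; the bootstrap distribution of refits supplies the confidence region, which is the "bootstrapping on top of optimization" idea described above.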
Pub Date: 2025-11-12 | DOI: 10.1016/j.ascom.2025.101025
Pavel Kaygorodov , Ekaterina Malik , Dana Kovaleva , Oleg Malkov , Bernard Debray
The Binary star DataBase BDB (http://bdb.inasan.ru) has a long history, and its internal design has changed twice during its lifetime. The first version was written in the mid-1990s as CGI (Common Gateway Interface) shell scripts and used text files for data storage. It was later rewritten in Stackless Python using the Nagare library. The most recent major update was performed during the last year: as Nagare and other libraries accumulated more and more compatibility issues, we decided to rewrite the BDB code using a completely new approach. In this paper we present a brief introduction to this new approach to the distributed programming paradigm, which significantly speeds up development. We describe the switch from the traditional Model-View-Controller approach to a distributed application in which the server is a “primary node” controlling many web clients as “subordinate nodes”, delegating all User-Interface-related tasks to them.
Title: "A new approach to web-programming: Binary star DataBase (BDB) engine" (Astronomy and Computing, vol. 54, Article 101025)
Pub Date: 2025-11-03 | DOI: 10.1016/j.ascom.2025.101020
J. Li, B. Liang, S. Feng, W. Dai, S. Wei
Radio Frequency Interference (RFI) suppression is a crucial component of radio astronomical data processing: accurate elimination of interference preserves maximum observational purity for astronomical signals. Existing machine learning-based detection methods rely heavily on fully labeled data, often requiring thousands of annotated samples to achieve satisfactory performance. To address this limitation, we propose the Allspark-Unet model, a semi-supervised semantic segmentation network that incorporates a dedicated feature enhancement mechanism to reconstruct the feature representation of RFI signals. While achieving superior performance, the proposed architecture introduces a computational overhead compared to simpler baselines, representing a meaningful trade-off between performance gains and resource consumption. Experiments are conducted using a real dataset from the 40-meter radio telescope at Yunnan Observatory. Results demonstrate an accuracy of 0.98 with only 272 labeled samples. Compared to the baseline method, an improvement of 1.52% in the F1 score (to 0.90) is achieved, along with a 2.18% gain in the mean Intersection over Union (mIoU). Quantitative analysis reveals that Allspark-Unet effectively reduces the dependence on labeled data for RFI detection, and the proposed feature reconstruction mechanism enables reliable interference detection even in small-sample scenarios. A detailed analysis of this performance-computational cost trade-off is presented and discussed in the study.
Title: "RFI detection based on semi-supervised learning with improved Unet" (Astronomy and Computing, vol. 54, Article 101020)
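The reported segmentation metrics can be computed for binary RFI masks as follows; this is a generic sketch of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def f1_and_miou(pred, truth):
    """F1 score and mean IoU over the {background, RFI} classes for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # RFI pixels correctly flagged
    fp = np.sum(pred & ~truth)    # clean pixels wrongly flagged
    fn = np.sum(~pred & truth)    # RFI pixels missed
    tn = np.sum(~pred & ~truth)   # clean pixels correctly kept
    f1 = 2 * tp / (2 * tp + fp + fn)   # harmonic mean of precision and recall
    iou_rfi = tp / (tp + fp + fn)      # IoU of the RFI (positive) class
    iou_bg = tn / (tn + fp + fn)       # IoU of the background class
    return f1, (iou_rfi + iou_bg) / 2

# tiny example masks (1 = RFI pixel)
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
f1, miou = f1_and_miou(pred, truth)
```

Because RFI pixels are typically a small minority of a waterfall plot, F1 and mIoU are more informative than raw accuracy, which is why the abstract reports all three.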
Pub Date: 2025-10-30 | DOI: 10.1016/j.ascom.2025.101016
Alexis Heitzmann , María J. González Bonilla , Anja Bekkelien , Babatunde Akinsanmi , Mathias O.W. Beck , Nicolas Billot , Christopher Broeg , Adrien Deline , David Ehrenreich , Andrea Fortier , Marcus G.F. Kirsch , Monika Lendl , Nuria Alfaro Llorente , Naiara Fernández de Bobadilla Vallano , María Fuentes Tabas , Anthony G. Maldonado , Eva M. Vega Carrasco , David Modrego Contreras
The CHaracterising ExOPlanet Satellite (CHEOPS) is the first European Space Agency (ESA) small-class mission. It has been performing photometric astronomical observations with a particular emphasis on exoplanetary science for the past five years. A distinctive feature of CHEOPS is that the responsibility for all operational aspects of the mission lies with the CHEOPS consortium rather than ESA. As a result, all subsystems, their architecture, and operational processes have been independently developed and tailored specifically to CHEOPS. This paper offers an overview of the CHEOPS operational subsystems, the design, and the automation framework that compose the two main components of the CHEOPS ground segment: the Mission Operations Center (MOC) and the Science Operations Center (SOC). This comprehensive description of the CHEOPS workflow aims to serve as a reference and potential source of inspiration for future small and/or independent space missions.
Title: "CHEOPS ground segment: Systems and automation for mission and science operations" (Astronomy and Computing, vol. 54, Article 101016)
Pub Date: 2025-10-27 | DOI: 10.1016/j.ascom.2025.101019
A. Callejas-Tavera , E. Molino-Minero-Re , O. Valenzuela
The upcoming galaxy large-scale surveys, such as the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), will generate photometry for billions of galaxies. The interpretation of large-scale weak lensing maps, as well as the estimation of galaxy clustering, requires reliable, high-precision redshifts from multi-band photometry. However, obtaining spectroscopy for billions of galaxies is impractical and complex; therefore, assembling a sufficiently large number of galaxies with spectroscopic observations to train supervised algorithms for accurate redshift estimation is a significant challenge and an open research area. We propose a novel methodology called Co-SOM based on Co-training and Self-Organizing Maps (SOM), integrating labeled (sources with spectroscopic redshifts) and unlabeled (sources with photometric observations only) data during the training process, through a selection method based on map topology (the connectivity structure of the SOM lattice), to leverage the limited spectroscopy available for photo-z estimation. We utilized the magnitudes and colors of Sloan Digital Sky Survey data release 18 (SDSS-DR18) to analyze and evaluate the performance, varying the proportion of labeled data and adjusting the training parameters. For training sets with 1% labeled data (≈20,000 galaxies) we achieved a bias Δz = 0.00007 ± 0.00022, a precision σ_zp = 0.00063 ± 0.00032, and an outlier fraction out_frac = 0.02083 ± 0.00027. Additionally, we conducted experiments varying the volume of labeled data, and the bias remains below 10⁻³ regardless of the size of the spectroscopic or photometric data. These low-redshift results demonstrate the potential of semi-supervised learning to address spectroscopic limitations in future photometric surveys.
Title: "Co-SOM: Co-training for photometric redshift estimation using Self-Organizing Maps" (Astronomy and Computing, vol. 54, Article 101019)
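The abstract does not spell out its exact metric definitions, so the sketch below uses a common photo-z convention (normalized residuals, NMAD precision, a |Δz| > 0.15 outlier cut) as an assumption; the paper's definitions may differ.

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, out_cut=0.15):
    """Point-estimate photo-z metrics on normalized residuals dz = (zp - zs)/(1 + zs).

    bias = median(dz); precision = NMAD of dz; outliers: |dz| > out_cut.
    """
    dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1 + np.asarray(z_spec))
    bias = np.median(dz)
    sigma = 1.4826 * np.median(np.abs(dz - bias))  # normalized MAD (robust scatter)
    out_frac = np.mean(np.abs(dz) > out_cut)
    return bias, sigma, out_frac

z_spec = np.linspace(0.05, 0.3, 10)  # toy low-redshift sample
z_phot = z_spec.copy()
z_phot[0] += 0.5                     # one catastrophic outlier
bias, sigma, out_frac = photoz_metrics(z_phot, z_spec)
```

Robust statistics (median, NMAD) keep the bias and precision estimates insensitive to the catastrophic outliers that the outlier fraction counts separately.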
Pub Date: 2025-10-22 | DOI: 10.1016/j.ascom.2025.101017
Shahid Chaudhary , Muhammad Danish Sultan , Asifa Ashraf , Ali M. Mubaraki , Saad Althobaiti , Farruh Atamurotov , Asif Mahmood
We consider a recently developed AdS black hole in f(Q) cosmologies to ascertain how accretion, graybody factors, and scalar perturbations are affected by Hawking evaporation. We utilize the Stefan–Boltzmann law to construct numerical plots exhibiting the various evaporation patterns of the distinct models. Our findings reveal realistic but distinct rates of mass loss across the models, demonstrating the substantial impact of the parameters as well as the sensitivity of the evaporation process to the underlying gravitational theory. We employ the Novikov–Thorne model to investigate thin accretion disks around the AdS black hole in f(Q) cosmologies. We compute direct and secondary images of the black hole’s accretion disk at different observational angles. We observe that the considered model significantly affects the structure of the accretion disk and gravitational lensing. Moreover, we explore the time evolution of the black hole under the influence of the physical parameters. We infer the pattern of both gradual and rapid decay precipitated by varying the geometric configuration in f(Q) gravity. We observe that higher values of the f(Q) gravity parameters lower the greybody factor bound across all frequencies. This suggests that higher parameter values suppress the escape of radiation from the black hole.
{"title":"Exploring the effects of Hawking evaporation on accretion disk, greybody factors and scalar perturbations of AdS black hole in f(Q) cosmologies","authors":"Shahid Chaudhary , Muhammad Danish Sultan , Asifa Ashraf , Ali M. Mubaraki , Saad Althobaiti , Farruh Atamurotov , Asif Mahmood","doi":"10.1016/j.ascom.2025.101017","DOIUrl":"10.1016/j.ascom.2025.101017","url":null,"abstract":"<div><div>We consider a recently developed AdS black hole in <span><math><mrow><mi>f</mi><mrow><mo>(</mo><mi>Q</mi><mo>)</mo></mrow></mrow></math></span> cosmologies to ascertain how accretion, greybody factors and scalar perturbations are affected by Hawking evaporation. We utilize the Stefan–Boltzmann law to construct numerical plots exhibiting various evaporation patterns through distinct models. Our findings provide a realistic but distinct rate of mass loss across the models, revealing the substantial impact of the parameters as well as the sensitivity of the evaporation process to the underlying gravitational theory. We employ the Novikov–Thorne model to investigate thin accretion disks around the AdS black hole in <span><math><mrow><mi>f</mi><mrow><mo>(</mo><mi>Q</mi><mo>)</mo></mrow></mrow></math></span> cosmologies. We compute direct and secondary images of the black hole’s accretion disk at different observational angles. We observe that the considered model significantly affects the structure of the accretion disks and gravitational lensing. Moreover, we explore the time evolution of the black hole under the influence of the physical parameters. We infer the pattern of both the gradual and quick decay precipitated by varying geometric configurations in <span><math><mrow><mi>f</mi><mrow><mo>(</mo><mi>Q</mi><mo>)</mo></mrow></mrow></math></span> gravity. We observe that higher values of the <span><math><mrow><mi>f</mi><mrow><mo>(</mo><mi>Q</mi><mo>)</mo></mrow></mrow></math></span> gravity parameters lower the greybody factor bound across all frequencies. 
This suggests that higher values of the parameters suppress the escape of radiation from the black hole.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"54 ","pages":"Article 101017"},"PeriodicalIF":1.8,"publicationDate":"2025-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145363559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
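The Stefan–Boltzmann evaporation described above can be sketched in the simplest (Schwarzschild, geometric-units) case: the Hawking temperature scales as 1/M and the horizon area as M², so dM/dt = −k/M². This toy integrator is only illustrative — the paper's f(Q) models modify the rate through their extra parameters, which are not modeled here:

```python
import numpy as np

def evaporate(m0, k=1.0, dt=1e-4, t_max=10.0):
    """Toy Stefan-Boltzmann mass loss, dM/dt = -k/M**2.

    Forward-Euler integration; stops once the mass (numerically) vanishes.
    """
    m, t = m0, 0.0
    times, masses = [0.0], [m0]
    while m > 1e-3 and t < t_max:
        m -= dt * k / m**2          # Euler step of dM/dt = -k/M^2
        t += dt
        times.append(t)
        masses.append(max(m, 0.0))  # clamp the final overshoot
    return np.array(times), np.array(masses)

# Analytic check: dM/dt = -k/M^2 integrates to M(t) = (M0^3 - 3kt)^(1/3),
# i.e. a total evaporation time of t_ev = M0^3 / (3k).
```

Doubling m0 lengthens the lifetime eightfold (t_ev ∝ M0³), which is the kind of gradual-versus-quick decay contrast the abstract describes across models.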
Pub Date : 2025-10-22DOI: 10.1016/j.ascom.2025.101018
Theophilus Ansah-Narh , Jordan Lontsi Tedongmo , Joseph Bremang Tandoh , Nia Imara , Ezekiel Nii Noye Nortey
The classification of radio galaxies is central to understanding galaxy evolution, active galactic nuclei dynamics, and the large-scale structure of the universe. However, traditional manual techniques are inadequate for processing the massive, heterogeneous datasets generated by modern radio surveys. In this study, we present a probabilistic machine learning framework that integrates Singular Value Decomposition (SVD) for feature extraction with Bayesian ensemble learning to achieve robust, scalable radio galaxy classification. The SVD approach effectively reduces dimensionality while preserving key morphological structures, enabling efficient representation of galaxy features. To mitigate class imbalance and avoid the introduction of artefacts, we incorporate a Local Neighbourhood Encoding strategy tailored to the astrophysical distribution of galaxy types. The resulting features are used to train and optimise several baseline classifiers: Logistic Regression, Support Vector Machines, LightGBM, and Multi-Layer Perceptrons within bagging, boosting, and stacking ensembles governed by a Bayesian weighting scheme. Our results demonstrate that Bayesian ensembles outperform their traditional counterparts across all metrics, with the Bayesian stacking model achieving a classification accuracy of 99.0% and an F1-score of 0.99 across Compact, Bent, Fanaroff–Riley Type I (FR-I), and Type II (FR-II) sources. Interpretability is enhanced through SHAP analysis, which highlights the principal components most associated with morphological distinctions. Beyond improving classification performance, our framework facilitates uncertainty quantification, paving the way for more reliable integration into next-generation survey pipelines. This work contributes a reproducible and interpretable methodology for automated galaxy classification in the era of data-intensive radio astronomy.
{"title":"Decoding the Radio Sky: Bayesian ensemble learning and SVD-based feature extraction for automated radio galaxy classification","authors":"Theophilus Ansah-Narh , Jordan Lontsi Tedongmo , Joseph Bremang Tandoh , Nia Imara , Ezekiel Nii Noye Nortey","doi":"10.1016/j.ascom.2025.101018","DOIUrl":"10.1016/j.ascom.2025.101018","url":null,"abstract":"<div><div>The classification of radio galaxies is central to understanding galaxy evolution, active galactic nuclei dynamics, and the large-scale structure of the universe. However, traditional manual techniques are inadequate for processing the massive, heterogeneous datasets generated by modern radio surveys. In this study, we present a probabilistic machine learning framework that integrates Singular Value Decomposition (SVD) for feature extraction with Bayesian ensemble learning to achieve robust, scalable radio galaxy classification. The SVD approach effectively reduces dimensionality while preserving key morphological structures, enabling efficient representation of galaxy features. To mitigate class imbalance and avoid the introduction of artefacts, we incorporate a Local Neighbourhood Encoding strategy tailored to the astrophysical distribution of galaxy types. The resulting features are used to train and optimise several baseline classifiers: Logistic Regression, Support Vector Machines, LightGBM, and Multi-Layer Perceptrons within bagging, boosting, and stacking ensembles governed by a Bayesian weighting scheme. Our results demonstrate that Bayesian ensembles outperform their traditional counterparts across all metrics, with the Bayesian stacking model achieving a classification accuracy of 99.0% and an F1-score of 0.99 across Compact, Bent, Fanaroff–Riley Type I (FR-I), and Type II (FR-II) sources. Interpretability is enhanced through SHAP analysis, which highlights the principal components most associated with morphological distinctions. 
Beyond improving classification performance, our framework facilitates uncertainty quantification, paving the way for more reliable integration into next-generation survey pipelines. This work contributes a reproducible and interpretable methodology for automated galaxy classification in the era of data-intensive radio astronomy.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"54 ","pages":"Article 101018"},"PeriodicalIF":1.8,"publicationDate":"2025-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145417556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
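The SVD feature-extraction step described above can be illustrated with synthetic data: flatten each galaxy image into a row of X, then project onto the top-k right singular vectors to obtain compact features. Everything below (array sizes, k, the injected rank-1 "shared morphology") is illustrative rather than the authors' pipeline, and the Bayesian ensemble stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, n_pix, k = 200, 64, 8

# Mock "galaxy images", flattened to rows, plus a shared rank-1 structure
# standing in for a common morphological pattern.
X = rng.normal(size=(n_img, n_pix))
X += rng.normal(size=(n_img, 1)) @ rng.normal(size=(1, n_pix))

Xc = X - X.mean(axis=0)                      # center before decomposing
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:k].T                     # k-dimensional features per image

# Fraction of total variance retained by the k leading components
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

The low-dimensional `features` matrix is what downstream classifiers would consume; because the injected shared structure dominates the spectrum, a small k already captures most of the variance.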