Billy Shrive, Don Pollacco, Paul Chote, James A. Blake, B. Cooke, James McCormac, R. West, Robert Airey, Alex MacManus, Phineas Allen
As the cost of reaching LEO has fallen, we expect an almost exponential increase over the next decade in the number of LEO spacecraft from established and potentially new operators. Remote characterisation of these, and of the growing population of decommissioned satellites and debris, is therefore becoming more important, along with identifying unannounced changes in megaconstellations. In this paper we examine the light curves of known LEO platforms with a boosted tree algorithm to determine whether spacecraft properties are discernible. A priori we expected little correlation, anticipating that the large variations in sight-line geometries would mask signatures from the spacecraft. Using large numbers of light curves from the MMT-9 database, we find that this is not the case: most platforms are statistically identifiable in most sight-lines, and we tentatively attribute this separability to differences and similarities between their downward-facing instruments. Pairs of satellite platforms can be distinguished 86.13 per cent of the time (N = 15 600) using this method. Evolutionary changes to the Starlink satellite platform are also distinguished.
Classifying LEO satellite platforms with boosted decision trees. RAS Techniques and Instruments, published 2024-05-14. doi: 10.1093/rasti/rzae018.
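The study above classifies light curves with a boosted tree algorithm. As a minimal, hypothetical sketch (not the authors' pipeline; the two summary features and the stump ensemble here are assumptions purely for illustration), an AdaBoost of decision stumps over simple light-curve features can separate two synthetic "platform" classes:

```python
import math
import random

def light_curve_features(mags):
    """Two toy summary features of a light curve: amplitude and mean point-to-point scatter."""
    amp = max(mags) - min(mags)
    p2p = sum(abs(b - a) for a, b in zip(mags, mags[1:])) / (len(mags) - 1)
    return (amp, p2p)

def train_boosted_stumps(X, y, rounds=5):
    """AdaBoost over depth-1 decision stumps; labels in y are +1 / -1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(len(X[0])):                   # each feature
            for t in sorted({x[f] for x in X}):      # each candidate threshold
                for sign in (1, -1):
                    preds = [sign if x[f] > t else -sign for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, sign, preds)
        err, f, t, sign, preds = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, f, t, sign))
        # Reweight: misclassified points gain weight for the next round
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x[f] > t else -s) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1
```

In practice a library implementation (e.g. gradient-boosted trees) would replace this hand-rolled ensemble; the sketch only shows the boosting mechanics.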
PyExoCross is a Python adaptation of the ExoCross Fortran application (Yurchenko, A&A, 614, A131 (2018)). PyExoCross is designed for post-processing the huge molecular line lists generated by the ExoMol project and similar initiatives such as the HITRAN and HITEMP databases. It generates absorption and emission stick spectra, cross sections and other properties (partition functions, specific heats, cooling functions, lifetimes and oscillator strengths) from molecular line lists. PyExoCross calculates cross sections with four line profiles (Doppler, Gaussian, Lorentzian and Voigt) in both sampling and binned methods; a number of options are available for computing Voigt profiles, which we test for speed and accuracy. PyExoCross supports importing and exporting line lists in the ExoMol and HITRAN/HITEMP formats and provides conversion between the ExoMol and HITRAN data formats. In addition, PyExoCross includes extra code for users to automate the batch download of line list files from the ExoMol database.
PyExoCross: a Python program for generating spectra and cross sections from molecular line lists. Jingxin Zhang, J. Tennyson, S. Yurchenko. RAS Techniques and Instruments, published 2024-04-24. doi: 10.1093/rasti/rzae016.
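For orientation on the line shapes involved (this is not PyExoCross code; the mixing formula is the standard Thompson-Cox-Hastings pseudo-Voigt approximation, an approximation rather than one of PyExoCross's exact Voigt options), the basic area-normalised profiles can be sketched in NumPy:

```python
import numpy as np

def gaussian(nu, nu0, alpha):
    """Area-normalised Gaussian line profile with HWHM alpha."""
    return np.sqrt(np.log(2) / np.pi) / alpha * np.exp(-np.log(2) * ((nu - nu0) / alpha) ** 2)

def lorentzian(nu, nu0, gamma):
    """Area-normalised Lorentzian line profile with HWHM gamma."""
    return gamma / np.pi / ((nu - nu0) ** 2 + gamma ** 2)

def pseudo_voigt(nu, nu0, alpha, gamma):
    """Pseudo-Voigt: a linear mix of Gaussian and Lorentzian profiles,
    accurate to roughly one per cent of the exact Voigt convolution."""
    fG, fL = 2.0 * alpha, 2.0 * gamma          # FWHM of each component
    # Combined FWHM and mixing weight from the Thompson-Cox-Hastings polynomials
    f = (fG**5 + 2.69269 * fG**4 * fL + 2.42843 * fG**3 * fL**2
         + 4.47163 * fG**2 * fL**3 + 0.07842 * fG * fL**4 + fL**5) ** 0.2
    q = fL / f
    eta = 1.36603 * q - 0.47719 * q**2 + 0.11116 * q**3
    return eta * lorentzian(nu, nu0, f / 2.0) + (1.0 - eta) * gaussian(nu, nu0, f / 2.0)
```

Exact Voigt evaluation is usually done via the Faddeeva function; the mix above is only a cheap stand-in that preserves unit area.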
N. Miller, P. W. Lucas, Y. Sun, Z. Guo, W. J. Cooper, C. Morris
The ability to automatically and robustly verify periodicity in time-series astronomical data is becoming more important as data sets rapidly increase in size. The age of large astronomical surveys has rendered manual inspection of time-series data less practical. Previous efforts to generate a false alarm probability for the periodicity of stars have been aimed at the analysis of a constructed periodogram. However, these methods correlate with features that do not pertain to periodicity, such as light curve shape, slow trends and stochastic variability. The common assumption that photometric errors are Gaussian and well determined is a further limitation of analytic methods. We present a novel machine-learning technique which directly analyses the phase-folded light curve for its false alarm probability. We show that the results of this method are largely insensitive to the shape of the light curve, and we establish minimum values for the number of data points and the amplitude-to-noise ratio.
The verification of periodicity with the use of recurrent neural networks. RAS Techniques and Instruments, published 2024-04-23. doi: 10.1093/rasti/rzae015.
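The paper's verifier is a recurrent network acting on the phase-folded light curve. As background only (a classical utility, not the authors' network; the dispersion statistic is an assumed illustrative choice), phase folding and a simple binned phase-dispersion check look like:

```python
import numpy as np

def phase_fold(times, period, t0=0.0):
    """Fold observation times on a trial period; returned phases lie in [0, 1)."""
    return ((np.asarray(times) - t0) / period) % 1.0

def phase_binned_dispersion(times, mags, period, nbins=10):
    """Mean within-phase-bin variance over total variance.
    Values well below 1 suggest a coherent signal at the trial period."""
    phase = phase_fold(times, period)
    mags = np.asarray(mags)
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    total = mags.var()
    within = np.mean([mags[bins == b].var() for b in range(nbins) if np.any(bins == b)])
    return within / total
```

At the true period the folded curve is tight within each phase bin, so the statistic drops well below 1; at a wrong period the bins mix all phases and it stays near 1.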
Space-based photometry missions produce exquisite light curves that contain a wealth of stellar variability on a wide range of timescales. Light curves also typically contain significant instrumental systematics – spurious, non-astrophysical trends that are common, in varying degrees, to many light curves. Empirical systematics-correction approaches using the information in the light curves themselves have been very successful, but tend to suppress astrophysical signals, particularly on longer timescales. Unlike its predecessors, the PLATO mission will use multiple cameras to monitor the same stars. We present REPUBLIC, a novel systematics-correction algorithm which exploits this multi-camera configuration to correct systematics that differ between cameras, while preserving the component of each star's signal that is common to all cameras, regardless of timescale. Through simulations with astrophysical signals (star spots and planetary transits), Kepler-like errors, and white noise, we demonstrate REPUBLIC's ability to preserve long-term astrophysical signals usually lost in standard correction techniques. We also explore REPUBLIC's performance with different numbers of cameras and different systematic properties. We conclude that REPUBLIC should be considered a potential complement to existing strategies for systematic correction in multi-camera surveys, with its utility contingent upon further validation and adaptation to the specific characteristics of the PLATO mission data.
REPUBLIC: A variability-preserving systematic-correction algorithm for PLATO's multi-camera light curves. Oscar Barragán, S. Aigrain, J. McCormac. RAS Techniques and Instruments, published 2024-04-09. doi: 10.1093/rasti/rzae014.
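A toy numerical illustration (emphatically not the REPUBLIC algorithm; the simulated drifts, noise levels, and naive averaging are all assumptions for demonstration) of why multiple cameras help: camera-specific systematics that are uncorrelated between cameras average down, while a signal common to all cameras survives untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cam, n_obs = 24, 400
t = np.linspace(0.0, 10.0, n_obs)
signal = 0.01 * np.sin(2 * np.pi * t / 7.0)          # slow common stellar signal
slopes = 0.02 * rng.standard_normal(n_cam)           # camera-specific linear drifts
noise = 0.002 * rng.standard_normal((n_cam, n_obs))  # white noise per camera
fluxes = signal[None, :] + slopes[:, None] * t[None, :] + noise

# Residual error of each single camera versus the naive cross-camera average
per_camera_rms = np.sqrt(np.mean((fluxes - signal) ** 2, axis=1))
combined = fluxes.mean(axis=0)
combined_rms = np.sqrt(np.mean((combined - signal) ** 2))
```

The average only suppresses the drifts by roughly the square root of the number of cameras; REPUBLIC itself fits and removes the per-camera systematics explicitly, which this toy does not attempt.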
Observations with spacecraft-mounted instruments are usually limited by their field of view and are often affected by the spacecraft's shadow or wake. The extent of both, though, can be derived from the spacecraft's geometry. In this work we present a robust method for calculating the field of view, as well as the extent of a spacecraft's shadow and wake, from readily available spacecraft CAD models. We demonstrate these principles on Cassini, where we give examples of vector-spacecraft intersection for the Cassini Langmuir Probe, as well as the fields of view of the Langmuir Probe and the Cassini Plasma Spectrometer.
A simple spacecraft – vector intersection methodology and applications. Georgios Xystouris, Oleg Shebanits, C. Arridge. RAS Techniques and Instruments, published 2024-03-23. doi: 10.1093/rasti/rzae012.
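CAD models are typically triangle meshes, so the basic building block of any spacecraft-vector intersection test of this kind (a generic sketch, not the paper's implementation) is a ray-triangle intersection such as the Möller-Trumbore algorithm:

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray-triangle intersection.
    Returns the distance t along the ray to the hit point, or None if no hit."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    e1 = np.asarray(v1, dtype=float) - v0
    e2 = np.asarray(v2, dtype=float) - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv                 # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None
```

Testing an instrument boresight against every facet of the mesh (and keeping the nearest positive t) then gives both obstruction of the field of view and shadow/wake extents.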
Victoria Da Poian, E. I. Lyness, J. Y. Qi, I. Shah, G. Lipstein, P. D. Archer, L. Chou, C. Freissinet, C. Malespin, A. McAdam, C. A. Knudson, B. P. Theiling, S. M. Hörst
We set up two open-science machine learning (ML) challenges focusing on building models to automatically analyze mass spectrometry (MS) data for Mars exploration. ML challenges provide an excellent way to engage a diverse set of experts with benchmark training data, explore a wide range of ML and data science approaches, and identify promising models based on empirical results, as well as to get independent external analyses to compare to those of the internal team. These two challenges were proof-of-concept projects to analyze the feasibility of combining data collected from different instruments in a single ML application. We selected mass spectrometry data from 1) commercial instruments and 2) the Sample Analysis at Mars (SAM, an instrument suite that includes a mass spectrometer subsystem onboard the Curiosity rover) testbed. These challenges, organized with DrivenData, gathered more than 1,150 unique participants from all over the world, and obtained more than 600 solutions contributing powerful models to the analysis of rock and soil samples relevant to planetary science using various mass spectrometry datasets. These two challenges demonstrated the suitability and value of multiple ML approaches to classifying planetary analog datasets from both commercial and flight-like instruments. We present the processes from the problem identification, challenge setups, and challenge results that gathered creative and diverse solutions from worldwide participants, in some cases with no backgrounds in mass spectrometry. We also present the potential and limitations of these solutions for ML application in future planetary missions. Our longer-term goal is to deploy these powerful methods onboard the spacecraft to autonomously guide space operations and reduce ground-in-the-loop reliance.
Leveraging open science machine learning challenges for data constrained planetary mission instruments. RAS Techniques and Instruments, published 2024-03-15. doi: 10.1093/rasti/rzae009.
Benjamin Metha, S. Birrer, T. Treu, M. Trenti, Xuheng Ding, Xin Wang
Historically, metallicity profiles of galaxies have been modelled using a radially symmetric, two-parameter linear model, which reveals that most galaxies are more metal-rich in their central regions than in their outskirts. However, this model is known to yield inaccurate results when the point-spread function (PSF) of a telescope is large. Furthermore, a radially symmetric model cannot capture asymmetric structures within a galaxy. In this work, we present an extension of the popular forward-modelling Python package lenstronomy which allows the user to overcome both of these obstacles. We demonstrate the new features of this code base through two illustrative examples on simulated data. First, we show that through forward modelling, lenstronomy is able to accurately recover the metallicity gradients of galaxies, even when the PSF is comparable to the size of a galaxy, as long as the data are observed with a sufficient number of pixels. Additionally, we demonstrate how lenstronomy is able to fit irregular metallicity profiles to galaxies that are not well described by a simple surface brightness profile. This opens up pathways for detailed investigations into the connections between morphology and chemical structure for galaxies at cosmological distances using the transformative capabilities of JWST. Our code is publicly available and open source (https://github.com/astrobenji/lenstronomy-metals-notebooks), and can also be used to model spatial distributions of other galaxy properties that are traced by a galaxy's surface brightness profile.
A forward-modelling approach to overcome PSF smearing and fit flexible models to the chemical structure of galaxies. RAS Techniques and Instruments, published 2024-03-13. doi: 10.1093/rasti/rzae010.
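The effect the paper corrects for can be reproduced in a one-dimensional toy (an illustration of PSF smearing only; the exponential brightness profile, gradient value, and PSF width are all assumed numbers, and this is not lenstronomy code): convolving a flux-weighted metallicity map with a PSF flattens the measured gradient.

```python
import numpy as np

r = np.linspace(-10.0, 10.0, 2001)             # radius, arbitrary units
flux = np.exp(-np.abs(r) / 2.0)                # exponential surface brightness
metal = 9.0 - 0.05 * np.abs(r)                 # true gradient: -0.05 dex per unit radius

psf = np.exp(-0.5 * (r / 1.5) ** 2)            # Gaussian PSF
psf /= psf.sum()

def smear(x):
    """Convolve a profile with the PSF (kernel is centred on r = 0)."""
    return np.convolve(x, psf, mode="same")

# What a spectrograph measures: flux-weighted, PSF-convolved metallicity
observed = smear(flux * metal) / smear(flux)

def fitted_slope(rr, zz, rmax=5.0):
    """Linear gradient fitted over the inner radii, 0 <= r <= rmax."""
    sel = (rr >= 0.0) & (rr <= rmax)
    return np.polyfit(rr[sel], zz[sel], 1)[0]
```

Because the PSF mixes bright inner (metal-rich) light into outer radii, the fitted slope of `observed` is biased shallow relative to the true -0.05; forward modelling avoids this by smearing the model rather than interpreting the smeared data directly.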
I. McDonald, Albert Zijlstra, Nick L. J. Cox, Emma L. Alexander, Alexander Csukai, Ria Ramkumar, Alexander Hollings
Stellar atmosphere modelling predicts the luminosity and temperature of a star, together with parameters such as the effective gravity and the metallicity, by reproducing the observed spectral energy distribution. Most observational data come from photometric surveys using a variety of passbands. We herein present the Python Stellar Spectral Energy Distribution (PySSED) routine, designed to combine photometry from disparate catalogues, fit the luminosity and temperature of stars, and determine departures from stellar atmosphere models such as infrared or ultraviolet excess. We detail the routine's operation and present use cases for individual stars, stellar populations, and wider regions of the sky. PySSED benefits from fully automated processing, allowing fitting of arbitrarily large datasets at the rate of a few seconds per star.
PySSED: An automated method of collating and fitting stellar spectral energy distributions. RAS Techniques and Instruments, published 2024-02-19. doi: 10.1093/rasti/rzae005.
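The core idea of SED fitting can be shown with a deliberately crude toy (not PySSED internals, which use stellar atmosphere models rather than the blackbody and brute-force grid assumed here): scale a model spectrum to broadband fluxes and pick the temperature that minimises chi-squared.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck constant, light speed, Boltzmann

def planck(wav, T):
    """Blackbody spectral radiance B_lambda(T) in SI units."""
    return (2.0 * H * C**2 / wav**5) / np.expm1(H * C / (wav * KB * T))

def fit_temperature(wavs, fluxes, t_grid=None):
    """Brute-force chi-squared fit of a blackbody temperature to photometry.
    The free scale factor absorbs distance and radius, which broadband
    photometry alone cannot separate from luminosity."""
    if t_grid is None:
        t_grid = np.linspace(2000.0, 20000.0, 1801)   # 10 K steps
    best_t, best_chi2 = None, np.inf
    for T in t_grid:
        model = planck(wavs, T)
        scale = (fluxes * model).sum() / (model * model).sum()  # least-squares scale
        chi2 = ((fluxes - scale * model) ** 2).sum()
        if chi2 < best_chi2:
            best_t, best_chi2 = T, chi2
    return best_t
```

Real SED fitters also handle filter response curves, extinction, and catalogue cross-matching, which this sketch ignores entirely.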
Shu-Ping Yan, Li Ji, Ping Zhang, Siming Liu, Lei Lu, Min Long
Time-frequency analysis can provide detailed dynamical information about celestial bodies and is critical for the comprehension of astronomical phenomena. However, it is far from well developed in astronomy. The Hilbert-Huang transform (HHT) is an advanced time-frequency method, but it has two problems when applied to astronomical signals. One is that many astronomical signals may be composed of multiple components with various amplitudes and frequencies, whereas HHT uses assisting noise of a single amplitude to extract all components. The other is that HHT is an empirical method whose tunable parameters must be optimized against experimental results or known facts, which are difficult to obtain in astronomy, making it hard to determine whether a signal decomposition is correct. In this study, we adjust the noise amplitude to optimize the decomposition based on the orthogonality of the obtained components, and discard decompositions with non-physical results. Three experiments show that this new extension of HHT is an effective method for high-resolution time-frequency analysis in astronomy. It can be used to extract valuable information that is inaccessible to other methods, and thus has the potential to open up new avenues for astronomy research.
A promising method for breaking the logjam of time-frequency analysis in astronomy. RAS Techniques and Instruments, published 2024-01-25. doi: 10.1093/rasti/rzae001.
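The selection criterion above rests on how orthogonal the extracted components are. One common definition of the index of orthogonality used in EMD/HHT work (an assumed, standard form; the paper may use a variant) sums the normalised cross-products of all component pairs:

```python
import numpy as np

def orthogonality_index(components):
    """Index of orthogonality of a signal decomposition.
    Sums the pairwise inner products of components, normalised by the
    energy of their sum; near zero when components are mutually orthogonal."""
    comps = np.asarray(components, dtype=float)
    total = comps.sum(axis=0)
    norm = (total * total).sum()
    io = 0.0
    for i in range(len(comps)):
        for j in range(i + 1, len(comps)):
            io += 2.0 * (comps[i] * comps[j]).sum() / norm
    return abs(io)
```

A decomposition whose components leak into one another (mode mixing) gives a large index, so minimising it over the assisting-noise amplitude is a natural selection rule.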
Correction to: Personalized anomaly detection using deep active learning. RAS Techniques and Instruments, published 2024-01-01. doi: 10.1093/rasti/rzae008.