
Astronomy and Computing: Latest Articles

MongoDB scalability for astronomical time series: The POEMAS solar radio telescope evaluation without HPC
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-27 | DOI: 10.1016/j.ascom.2025.101053
W. Conde , A. Valio , C.G.G. de Castro
The increasing temporal resolution and structural diversity of modern solar instruments place growing demands on database systems used in observational astronomy. At the Center for Radio Astronomy and Astrophysics Mackenzie (CRAAM), this challenge is amplified by the need to consolidate heterogeneous data streams from multiple telescopes within a single virtual machine. With only 32 GB of RAM available (16 GB allocated to the database), a central design question emerged: when restricted to a single physical host, can a virtualized sharded cluster offer practical scalability advantages over a standalone deployment? To investigate this, we conducted an empirical evaluation of MongoDB using 10 ms observations from the POEMAS radio telescope, tested at volumes of 15M, 150M, and 500M documents. Results show that, although sharding introduces coordination overhead for selective queries, it provides substantial gains for global aggregations, achieving speedups above 600× while maintaining compression ratios near 85%. The analysis identifies an operational threshold of roughly 150 million documents per collection to sustain stable performance under the available resources. Based on these findings, the same single-node configuration used in the benchmarks was employed to process the full historical POEMAS dataset, totaling 3.3 billion records and producing approximately 50 GB of consolidated FITS products. These products and their associated metadata are made available to the community through a cloud-hosted portal at reduced operational cost. This work documents practical scalability boundaries for astronomical time series in resource-constrained environments and supports the deployment currently operating at CRAAM.
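The kind of "global aggregation" that sharding accelerates here can be sketched as a MongoDB aggregation pipeline. This is an illustrative reconstruction, not the paper's benchmark code: the collection and field names (`poemas_10ms`, `timestamp`, `flux`) are hypothetical, since the abstract does not give the POEMAS schema.

```python
from datetime import datetime

# Daily mean flux over the whole collection: a global aggregation of the
# kind that, per the paper, gains most from sharding, because each shard
# scans its own chunks in parallel before the mongos router merges the
# partial results.
pipeline = [
    {"$match": {"timestamp": {"$gte": datetime(2012, 1, 1)}}},
    {"$group": {
        # $dateTrunc (MongoDB 5.0+) buckets the 10 ms samples by day
        "_id": {"$dateTrunc": {"date": "$timestamp", "unit": "day"}},
        "mean_flux": {"$avg": "$flux"},
        "n": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]

# Against a live deployment this pipeline would be run via pymongo:
#   client = pymongo.MongoClient(...)
#   results = client.craam.poemas_10ms.aggregate(pipeline)
print(len(pipeline))
```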
Entropy-based properties and Bayesian modeling of the X-Exponential Distribution with applications in astronomy
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-27 | DOI: 10.1016/j.ascom.2025.101054
Abdelfateh Beghriche , Zineb Azouz , Halim Zeghdoudi
This study introduces and systematically investigates the X-Exponential Distribution (XED), a one-parameter lifetime model that generalizes the exponential–gamma mixture structure. We derive closed-form expressions and simulation-based evaluations of several entropy measures, including Shannon, Rényi, and Tsallis entropy, as well as information divergence measures such as the Kullback–Leibler divergence. Special emphasis is placed on the distribution’s flexibility in modeling stochastic uncertainty and its intermediate tail behavior, which arises naturally from its representation as a mixture of exponential and gamma components. Furthermore, the mean and variance are obtained in explicit form, and connections to reliability modeling are highlighted. A Bayesian framework is outlined to incorporate prior uncertainty on the rate parameter, allowing hierarchical extensions. Application to lifetime and reliability data illustrates that the XED provides superior goodness-of-fit compared to classical exponential models, confirming its usefulness in applied probability, reliability engineering, and related domains.
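As a small numerical illustration of the entropy calculations described above, consider the plain exponential component of the mixture (a stand-in; the XED density itself is defined in the paper). Its Shannon differential entropy has the known closed form H = 1 − ln λ, which can be checked by direct integration:

```python
import numpy as np

def exp_pdf(x, lam):
    """Density of the exponential distribution with rate lam."""
    return lam * np.exp(-lam * x)

def shannon_entropy_numeric(lam, upper=60.0, n=400_000):
    # Differential entropy H = -int f(x) ln f(x) dx, by a simple
    # Riemann sum over [0, upper] (the tail beyond is negligible).
    x = np.linspace(1e-9, upper, n)
    dx = x[1] - x[0]
    f = exp_pdf(x, lam)
    return -np.sum(f * np.log(f)) * dx

lam = 2.0
closed_form = 1.0 - np.log(lam)   # known result: H(Exp(lam)) = 1 - ln(lam)
print(shannon_entropy_numeric(lam), closed_form)
```

The same numerical scheme extends to Rényi and Tsallis entropies by replacing the integrand with the appropriate power of the density.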
Hunting the outliers: Machine learning for anomalous time series detection
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-23 | DOI: 10.1016/j.ascom.2025.101049
N. De Bonis , S. Vaccaro , Y. Maruccia , G. Riccio , R. Crupi , S. Rubini , D. De Cicco , M. Brescia , S. Cavuoti
The increasing availability of large-scale time series datasets from modern astronomical surveys, such as those provided by Gaia and the forthcoming Legacy Survey of Space and Time (LSST) to be conducted with the Simonyi Survey Telescope at the Vera C. Rubin Observatory, is transforming time-domain astrophysics, enabling the systematic study of variable and transient phenomena across billions of sources. However, the sheer volume and heterogeneity of these data present significant challenges for traditional analysis techniques. Feature-based representations have emerged as a powerful solution, allowing the application of machine learning methods for efficient characterization and classification of astrophysical sources, including the reliable identification of Active Galactic Nuclei (AGNs). In this work, we introduce a general-purpose methodology for anomaly detection in time series data that transfers this feature engineering framework to the financial domain, where time series exhibit complexities, such as noise and irregular patterns, that closely resemble those in astrophysics. By combining domain-informed features with unsupervised algorithms (specifically Isolation Forests and Autoencoders), our approach effectively detects anomalous time series relative to the sample studied, demonstrating strong performance and highlighting its cross-domain transferability. Moreover, we propose an extension based on adaptive temporal windows to localize anomalies at a finer temporal resolution, further enhancing detection capabilities. Finally, we discuss the potential for reapplying this adaptive strategy to astrophysical time series, aiming to improve the identification of rare or unexpected behaviors in future studies.
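A minimal sketch of the Isolation Forest detection step described above, using synthetic stand-ins for the engineered time-series features (e.g. amplitude, skewness, autocorrelation); the paper's actual feature set and data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 300 "normal" series, each summarized by 3 mock features, plus one
# injected anomaly lying far outside the bulk of the sample.
normal = rng.normal(0.0, 1.0, size=(300, 3))
anomaly = np.array([[8.0, -8.0, 8.0]])
X = np.vstack([normal, anomaly])

# Isolation Forests flag points that are easy to isolate with random
# axis-aligned splits; -1 marks an anomaly, +1 a normal point.
clf = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = clf.fit_predict(X)

print(labels[-1])   # the injected series is flagged as an outlier
```

The same fit/predict pattern applies whether the feature vectors come from light curves or financial series, which is the cross-domain transfer the paper exploits.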
Polarization based direction of arrival estimation using a radio interferometric array
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-22 | DOI: 10.1016/j.ascom.2025.101052
Sarod Yatawatta
Direction of arrival (DOA) estimation is mostly performed using specialized arrays whose receiver spacing and layouts are carefully designed to match the operating frequency range. In contrast, radio interferometric arrays are designed to optimally sample the Fourier space data for making high-quality images of the sky. Therefore, using existing radio interferometric arrays (with arbitrary geometry and wide frequency variation) for DOA estimation is practically infeasible except by using images made by such interferometers. In this paper, we focus on low-cost DOA estimation without imaging, using a subset of a radio interferometric array and a fraction of the data collected by the full array, enabling early determination of DOAs. The proposed method is suitable for transient and low duty-cycle source detection. Moreover, it is an ideal follow-up step to online radio frequency interference (RFI) mitigation, enabling early estimation of the DOA of the detected RFI.
ArXSP: A Python-based modular application for the reduction of digitized archival spectra
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-18 | DOI: 10.1016/j.ascom.2025.101050
I.M. Izmailova , A.Zh. Umirbayeva , M.K. Khassanov , L. Aktay , S.A. Shomshekova
We present a methodology for the reduction of archival spectral data together with the description of a newly developed Python-based software package featuring an interactive graphical interface. The work is primarily aimed at processing spectra obtained with electron–optical converters (EOCs), which are characterized by geometric distortions induced by the magnetic field of the registration system. Such data are preserved, in particular, in the archive of the Fesenkov Astrophysical Institute (FAI), which contains about 10,000 photographic plates. These distortions, along with the need to transform the optical density of the photographic material into relative intensity, cannot be corrected by standard astronomical packages such as IRAF and therefore require a dedicated approach. Historically, reductions at FAI were performed using a program written in the Microsoft QuickC language for computing platforms of the 1990s, rendering it incompatible with modern operating systems. The new package is implemented with the PyQt5 framework, retaining the logic of the original code while extending its functionality. The implemented algorithms include image rotation and cropping, geometric distortion correction, construction of the characteristic curve linking optical density and intensity, and direct conversion of pixel values in object spectra. The developed software ensures reproducible reduction of archival spectra and provides a cross-platform environment with potential for further extensions.
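The density-to-intensity conversion mentioned above (via the characteristic curve of the photographic emulsion) can be sketched as follows. This is an illustrative reconstruction, not the actual ArXSP code, and the calibration values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration-wedge measurements: in the linear part of the
# characteristic curve, optical density D = gamma * log10(I) + c.
gamma_true, c_true = 0.8, 0.2
log_I_cal = np.linspace(0.0, 3.0, 12)            # known wedge exposures
D_cal = gamma_true * log_I_cal + c_true + rng.normal(0, 0.005, 12)

# Least-squares fit of the characteristic curve (slope = contrast gamma)
gamma_fit, c_fit = np.polyfit(log_I_cal, D_cal, 1)

def density_to_intensity(D):
    """Invert the fitted curve: measured density -> relative intensity."""
    return 10.0 ** ((D - c_fit) / gamma_fit)

# A pixel whose true exposure corresponds to log10(I) = 2
D_obj = gamma_true * 2.0 + c_true
print(density_to_intensity(D_obj))               # ~100 in relative units
```

Real plates need a low-order polynomial rather than a straight line near the toe and shoulder of the curve; the inversion step is the same.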
SPAN: A cross-platform Python GUI software for optical and near-infrared spectral analysis
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-18 | DOI: 10.1016/j.ascom.2025.101051
D. Gasparri , L. Morelli , U. Battino , J. Méndez-Abreu , A. de Lorenzo-Cáceres
The increasing availability of high-quality optical and near-infrared spectroscopic data, as well as advances in modelling techniques, have greatly expanded the scientific potential of spectroscopic studies. However, the software tools needed to exploit this potential often remain fragmented across multiple specialised packages, requiring scripting skills and manual integration to handle complex workflows. In this paper we present SPAN (SPectral ANalysis), a cross-platform, Python-based Graphical User Interface (GUI) software that integrates the essential steps of the modern spectroscopic workflow within a single, user-friendly environment. SPAN provides a coherent framework that unifies data preparation, spectral processing, and analysis tasks, using the pPXF software as its core engine for full spectral fitting. SPAN allows users to extract 1D spectra from FITS images and datacubes, perform spectral processing (e.g. Doppler correction, continuum modelling, denoising), and carry out detailed analyses, including equivalent width measurements, stellar and gas kinematics, and stellar population studies. It runs natively on Windows, Linux, macOS, and Android, and is fully task-driven, requiring no prior coding experience. We validate SPAN by comparing its output with existing pipelines and literature studies. By offering a flexible, accessible, and well integrated environment, SPAN simplifies and accelerates the spectral analysis workflow, while maintaining scientific accuracy.
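One of the processing steps listed above, the Doppler correction, reduces to a rescaling of the wavelength grid. A minimal sketch with illustrative numbers (not taken from SPAN itself):

```python
import numpy as np

c_kms = 299792.458   # speed of light in km/s

def to_rest_frame(wave_obs, v_kms):
    """Shift observed wavelengths to the rest frame for a recession
    velocity v (non-relativistic approximation, z = v/c)."""
    z = v_kms / c_kms
    return wave_obs / (1.0 + z)

wave_obs = np.array([6615.0, 6620.0])      # Angstrom, observed grid
wave_rest = to_rest_frame(wave_obs, 2400.0)
print(wave_rest[0])                        # ~6562.5, near H-alpha
```

After this shift, line measurements such as equivalent widths can be made against rest-frame bandpasses.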
Standard candle based distance estimation with learning algorithms
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-17 | DOI: 10.1016/j.ascom.2025.101044
Virginia Ajani , Martina Giovalli , Paolo Viviani , Beatrice Bucciarelli , Sibilla Perina , Deborah Busonero , Andrea Lessio , Vanina Fissore , Olivier Terzo
Measuring distances to celestial objects, such as stars and galaxies, is essential to characterizing their physical properties, formation, and evolution, and provides fundamental constraints on the expansion rate of the Universe. In this work, we present a methodological study comparing several machine learning and deep learning approaches for predicting astrophysical parameters — such as parallax, astrometry-based luminosity, and distance — using Cepheids and RR Lyrae samples from Gaia DR3 catalogues as input. In parallel, we introduce a framework to exploit the historical archive of photographic plates from INAF-OATo. In this context we extract a catalogue of objects from the plates, then cross-match the output sources with the Gaia dataset, with the goal of extending light curves that could serve as additional input for the models. Preliminary results identify the Gaussian Process regressor as the best-performing model among those tested, and the Multi-Layer Perceptron (MLP) as the most promising deep learning approach. We further study the propagation of uncertainties, enabling us to incorporate them both into the models and the predictions. For the plate analysis, we chose an image of the LMC field with 43 Cepheids in common with the Gaia catalogue as a first case study to validate our methodology.
Search for the best diagnostics in globular clusters (MRSES Approach)
IF 1.8 | CAS Region 4: Physics & Astrophysics | JCR Q2 ASTRONOMY & ASTROPHYSICS | Pub Date: 2025-12-13 | DOI: 10.1016/j.ascom.2025.101047
A. Chilingarian
High-dimensional classification and feature extraction present significant challenges in analyzing astrophysical data. This paper describes the implementation of the Multiple Random Search with Early Stop (MRSES) algorithm to detect weak, structured signals within high-dimensional noise. We adapt MRSES for use in stellar population studies by applying it to APOGEE-derived elemental abundance data from the globular clusters M13 and M3.
The MRSES method uses stochastic subset evaluation guided by the Bhattacharyya distance to rank features by their contribution to class separability. In globular clusters, this approach enables the recovery of chemically distinct subpopulations without assuming linearity or relying on marginal statistics. In M13, MRSES identifies classic second-generation markers, such as [Al/Fe] and [Na/Fe]. Meanwhile, in M3, it detects more subtle variations driven by iron-group elements, highlighting its sensitivity to weak internal differences.
Benchmark tests on synthetic Gaussian datasets verify the method’s robustness under different dimensionalities and correlation structures. MRSES avoids classical overfitting by using stochastic sampling instead of parametric fitting, and the Bhattacharyya distance threshold reflects an empirically calibrated noise boundary rather than an arbitrary parameter.
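To illustrate the kind of Bhattacharyya-distance feature ranking the abstract describes, here is a minimal sketch. It is not the authors' MRSES implementation: the function names, the Gaussian approximation of each class-conditional density, and the synthetic two-feature data are all assumptions made for the example.

```python
import numpy as np

def bhattacharyya_gaussian(x1, x2):
    """Bhattacharyya distance between two 1-D samples, approximating
    each class-conditional distribution as a Gaussian."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(ddof=1), x2.var(ddof=1)
    # Closed form for two univariate Gaussians.
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def rank_features(X1, X2):
    """Rank the columns (features) of two class samples by how much
    each one separates the classes."""
    scores = np.array([bhattacharyya_gaussian(X1[:, j], X2[:, j])
                       for j in range(X1.shape[1])])
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(0)
# Two synthetic "subpopulations": feature 0 carries a 3-sigma mean
# offset, feature 1 is pure noise.
X1 = rng.normal([0.0, 0.0], 0.1, size=(500, 2))
X2 = rng.normal([0.3, 0.0], 0.1, size=(500, 2))
order, scores = rank_features(X1, X2)
print(order[0])  # feature 0 ranked first
```

In a MRSES-style search, a score like this would be evaluated over many random feature subsets rather than one feature at a time, but the per-subset separability measure is the same idea.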
{"title":"Search for the best diagnostics in globular clusters (MRSES Approach)","authors":"A. Chilingarian","doi":"10.1016/j.ascom.2025.101047","DOIUrl":"10.1016/j.ascom.2025.101047","url":null,"abstract":"<div><div>High-dimensional classification and feature extraction present significant challenges in analyzing astrophysical data. This paper describes the implementation of the Multiple Random Search with Early Stop (MRSES) algorithm to detect weak, structured signals within high-dimensional noise. We adapt MRSES for use in stellar population studies by applying it to APOGEE-derived elemental abundance data from the globular clusters M13 and M3.</div><div>The MRSES method uses stochastic subset evaluation guided by the Bhattacharyya distance to rank features by their contribution to class separability. In globular clusters, this approach enables the recovery of chemically distinct subpopulations without assuming linearity or relying on marginal statistics. In M13, MRSES identifies classic second-generation markers, such as [Al/Fe] and [Na/Fe]. Meanwhile, in M3, it detects more subtle variations driven by iron-group elements, highlighting its sensitivity to weak internal differences.</div><div>Benchmark tests on synthetic Gaussian datasets verify the method’s robustness under different dimensionalities and correlation structures. 
MRSES avoids classical overfitting by using stochastic sampling instead of parametric fitting, and the Bhattacharyya distance threshold reflects an empirically calibrated noise boundary rather than an arbitrary parameter.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"55 ","pages":"Article 101047"},"PeriodicalIF":1.8,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BOPAS: The Bologna Observatory Pipeline for Astrometry of Satellites
IF 1.8 CAS Q4 Physics & Astrophysics Q2 ASTRONOMY & ASTROPHYSICS Pub Date : 2025-12-13 DOI: 10.1016/j.ascom.2025.101045
S. Palmiotto , A. Carbognani , A. Buzzoni , D. Modenini , P. Tortora
Facing the growing population of active and inactive objects across the different geocentric orbital regimes, Space Surveillance and Tracking activities are becoming increasingly important for efficient Space Traffic Management and Policy, in order to quantify and, where possible, mitigate the collision risk posed by space debris. Astrometric observations of resident space objects, including space debris, are not easy, however: because of their high angular speed, a good orbit solution requires a temporal precision of the order of a few milliseconds and the ability to measure the position of streaks. These observational difficulties are similar to those encountered in the astrometry of very fast near-Earth asteroids during close approaches to Earth. We developed our own image-processing pipeline for the astrometry of satellite streaks and tested it with observations of resident space objects and near-Earth asteroids from our Space Surveillance and Tracking telescope asset, obtaining astrometric errors of the order of 1 arcsec. We propose our pipeline as a valid alternative to the other tools in the literature. The software presented in this paper can also be used to process observations of fast NEAs and is freely available on GitHub, where anyone can download and adapt it as needed.
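A back-of-the-envelope calculation shows why millisecond timing matters for streak astrometry at the 1 arcsec level. The orbital speed, slant range, exposure time, and timing error below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical LEO pass, used only for illustration.
v_orbit_m_s = 7500.0   # typical LEO orbital speed (assumed)
range_m = 800e3        # slant range to the satellite (assumed)

omega_rad_s = v_orbit_m_s / range_m                 # angular speed, rad/s
omega_arcsec_s = math.degrees(omega_rad_s) * 3600   # ~1900 arcsec/s

timing_error_s = 0.005                              # 5 ms timestamp error (assumed)
along_track_err = omega_arcsec_s * timing_error_s   # position error, arcsec

exposure_s = 1.0                                    # assumed exposure
streak_len_arcsec = omega_arcsec_s * exposure_s     # streak length on the image

print(f"{omega_arcsec_s:.0f} arcsec/s; "
      f"5 ms timing error -> {along_track_err:.1f} arcsec; "
      f"1 s streak -> {streak_len_arcsec:.0f} arcsec")
```

At roughly 1900 arcsec/s, even a few milliseconds of shutter-timestamp uncertainty translates into an along-track error of several arcseconds, an order of magnitude above the ~1 arcsec astrometric accuracy reported above.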
{"title":"BOPAS: The Bologna Observatory Pipeline for Astrometry of Satellites","authors":"S. Palmiotto ,&nbsp;A. Carbognani ,&nbsp;A. Buzzoni ,&nbsp;D. Modenini ,&nbsp;P. Tortora","doi":"10.1016/j.ascom.2025.101045","DOIUrl":"10.1016/j.ascom.2025.101045","url":null,"abstract":"<div><div>Facing the increasing population of active and inactive objects along the different geocentric orbital regimes, the activities of Space Surveillance and Tracking are becoming more and more important for any efficient Space Traffic Management and Policy effort in order to quantify and (when possible) mitigate the collision risk with/among any space debris. However, the astrometric observations of resident space objects, including space debris, are not easy: due to their high angular speed, to get a good orbit solution, a temporal precision of the order of a few milliseconds and the ability to measure the position of streaks are required. These observational difficulties are similar to those encountered in the astrometry of very fast near-Earth asteroids as they pass closer to Earth. We developed our own image processing pipeline for astrometry of satellite streaks, and tested it with observations of resident space objects and near-Earth asteroids from our telescope asset for Space Surveillance and Tracking, obtaining astrometric errors in the order of 1 arcsec. We propose our pipeline as a valid tool among the other ones in the literature. 
The software presented in this paper could also be useful to process observations of fast NEAs, and is freely available on GitHub, where anyone can download and adapt it as needed.</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"55 ","pages":"Article 101045"},"PeriodicalIF":1.8,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimization of a choice of cross-match radius on Gaia sky
IF 1.8 CAS Q4 Physics & Astrophysics Q2 ASTRONOMY & ASTROPHYSICS Pub Date : 2025-12-12 DOI: 10.1016/j.ascom.2025.101028
Dana Kovaleva , Pavel Kaygorodov , Ekaterina Malik , Oleg Malkov
The data obtained by the Gaia space mission (Gaia Collaboration et al., 2016) provide a recent and notable example of a dataset that needs to be cross-matched with other datasets for various astronomical applications. We investigate how the properties of cross-matched datasets affect the results, in order to determine optimal matching parameters for each scientific task. We employ the Gaia DR3 main catalogue and synthetically generated datasets to perform cross-matches and obtain numerical metrics to predict the probability of mismatch. This probability depends on the matching radius, the positional accuracy of the matched dataset, the surface density in the vicinity, and the stellar magnitude of the source. The Gaia DR3 main catalogue was probed for metrics of the distribution of distance to the nearest neighbour as a function of sky position. We employed 768 test areas of 1 degree radius distributed uniformly over the sky and obtained for each area the mean, median and Q10 angular distance to the nearest neighbour. We found that the fraction of true positives decreases sharply when the ratio of positional accuracy to the characteristic nearest-neighbour distance exceeds 0.2. At the same time, as the positional accuracy of the matched datasets approaches their characteristic distance, the probabilities of true-positive and false-positive outcomes become similar. It was demonstrated that employing the “best neighbour” condition one may decrease the fraction of mismatches by up to an order of magnitude, especially in densely populated regions of the sky and at larger positional errors of the matched datasets. Using stellar magnitudes to constrain positional cross-matches is ineffective for faint sources. We calculated the expected fraction of mismatches as a function of sky position for cross-matches between Gaia and synthetic catalogues with positional uncertainties comparable to those of 2MASS and eRASS. In the most crowded sky regions (the Galactic Centre and disk), mismatches reach 15% for the nearest-neighbour and 4% for the best-neighbour criterion under 2MASS-level accuracy. For eRASS-level accuracy, they rise to 91% and 12%, respectively.
We provide metrics of distribution of distance to the nearest neighbour in Gaia DR3 main catalogue depending on sky coordinates in analytical and numerical form, as well as in the form of Python module gaia_density (https://github.com/noncath/gaia_density).
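The nearest-neighbour and “best neighbour” (mutual nearest neighbour) criteria compared above can be sketched in a few lines. This is a generic illustration, not the authors' gaia_density module: it matches unit vectors on the sphere with a k-d tree, converting the angular radius to the equivalent chord length, and all function names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def radec_to_unit(ra_deg, dec_deg):
    """Convert RA/Dec in degrees to 3-D unit vectors on the sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec):
    """Nearest-neighbour match of catalogue 1 against catalogue 2,
    keeping only pairs closer than radius_arcsec; returns index pairs."""
    u1, u2 = radec_to_unit(ra1, dec1), radec_to_unit(ra2, dec2)
    # An angle theta corresponds to a chord of length 2*sin(theta/2).
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    dist, idx = cKDTree(u2).query(u1, distance_upper_bound=chord)
    ok = np.isfinite(dist)          # unmatched sources get dist = inf
    return np.flatnonzero(ok), idx[ok]

def best_neighbour(ra1, dec1, ra2, dec2, radius_arcsec):
    """Keep only mutual nearest neighbours ("best neighbour" condition)."""
    i1, i2 = crossmatch(ra1, dec1, ra2, dec2, radius_arcsec)
    j2, j1 = crossmatch(ra2, dec2, ra1, dec1, radius_arcsec)
    back = dict(zip(j2, j1))        # cat2 index -> its nearest cat1 index
    keep = [k for k in range(len(i1)) if back.get(i2[k]) == i1[k]]
    return i1[keep], i2[keep]
```

For example, matching `ra1=[10.0, 20.0]` against `ra2=[10.00005, 50.0]` (all at dec 0) with a 1 arcsec radius keeps only the first pair, since the second source of each catalogue has no counterpart within the radius.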
{"title":"Optimization of a choice of cross-match radius on Gaia sky","authors":"Dana Kovaleva ,&nbsp;Pavel Kaygorodov ,&nbsp;Ekaterina Malik ,&nbsp;Oleg Malkov","doi":"10.1016/j.ascom.2025.101028","DOIUrl":"10.1016/j.ascom.2025.101028","url":null,"abstract":"<div><div>The data obtained by the Gaia space mission (Gaia Collaboration et al., 2016) provide a recent and notable example of a dataset that needs to be cross-matched with other datasets for various astronomical applications. We investigate how the properties of cross-matched datasets affect the results, in order to determine optimal matching parameters for each scientific task. We employ Gaia DR3 main catalogue and synthetically generated datasets to perform cross-match and obtain numerical metrics to predict probability of mismatch. This probability depends on the matching radius, the positional accuracy of the matched dataset, the surface density in the vicinity, and the stellar magnitude of the source. Gaia DR3 main catalogue was probed for the metrics of distribution to the nearest neighbour as a function of sky position. We employed 768 test areas of 1 degree radius distributed uniformly over the sky and obtained for each area mean, median and Q10 angular distance to the nearest neighbour. We found that the fraction of true positives decreases sharply when the ratio of positional accuracy to the characteristic nearest-neighbour distance exceeds 0.2. Simultaneously, while positional accuracy of the matched datasets approaches their characteristic distance, the probability of true positive and false positive outcomes are similar. It was demonstrated that employing “best neighbour” condition one may decrease the fraction of mismatches up to an order of magnitude, especially in the populated regions of sky and at larger positional errors of matched datasets. Using stellar magnitudes to constrain positional cross-matches is ineffective for faint sources. 
We calculated the expected fraction of mismatches as a function of sky position for cross-matches between Gaia and synthetic catalogues with positional uncertainties comparable to those of 2MASS and eRASS. In the most crowded sky regions (the Galactic Centre and disk), mismatches reach 15% for the nearest-neighbour and 4% for the best-neighbour criterion under 2MASS-level accuracy. For eRASS-level accuracy, they rise to 91% and 12%, respectively.</div><div>We provide metrics of distribution of distance to the nearest neighbour in Gaia DR3 main catalogue depending on sky coordinates in analytical and numerical form, as well as in the form of Python module gaia_density (<span><span>https://github.com/noncath/gaia_density</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":48757,"journal":{"name":"Astronomy and Computing","volume":"55 ","pages":"Article 101028"},"PeriodicalIF":1.8,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0