Heavy flavor and quarkonia results from the PHENIX experiment
Pub Date: 2024-07-02 | DOI: 10.1051/epjconf/202429609016
Krista Smith
The PHENIX experiment at RHIC has unique large-rapidity coverage (1.2 < |y| < 2.2) for heavy-flavor studies in heavy-ion collisions. This kinematic region has a smaller particle density and may undergo different nuclear effects before and after the hard process compared to mid-rapidity production. The latest PHENIX runs provide a large data set that allows, for the first time, the study of heavy-flavor and J/ψ flow in the large-rapidity region in Au+Au collisions at √s_NN = 200 GeV. This measurement has the potential to reveal a medium evolution distinct from that known at mid-rapidity.
{"title":"Heavy flavor and quarkonia results from the PHENIX experiment","authors":"Krista Smith","doi":"10.1051/epjconf/202429609016","DOIUrl":"https://doi.org/10.1051/epjconf/202429609016","url":null,"abstract":"The PHENIX experiment at RHIC has a unique large rapidity coverage (1.2 < |y| < 2.2) for heavy flavor studies in heavy-ion collisions. This kinematic region has a smaller particle density and may undergo different nuclear effects before and after the hard process when compared to mid-rapidity production. The latest PHENIX runs contain a large data set which allows, for the first time, the study of heavy flavor and J/ψ flow at the large rapidity region in Au+Au collisions at √SNN =200 GeV. This measurement has the potential to reveal a medium evolution distinct from that known at mid-rapidity.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"27 28","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141685581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ups and downs of inferred cosmological lithium
Pub Date: 2024-06-14 | DOI: 10.1051/epjconf/202429701007
Andreas Korn
I summarize the stellar side of the cosmological lithium problem(s). Evidence from independent studies is accumulating and indicates that stars may well be fully responsible for lowering their surface lithium from the predicted primordial value to observed levels through internal element-transport mechanisms collectively referred to as atomic diffusion. While atomic diffusion can be modelled from first principles, stellar evolution relies on a parametrized representation of convection, which makes it impossible to predict convective-boundary mixing, a vital stellar process that moderates atomic diffusion. More work is clearly needed here for a fully quantitative picture of lithium (and metallicity) evolution as stars age. Lastly, note that inferred stellar lithium-6 abundances have all but disappeared.
{"title":"The ups and downs of inferred cosmological lithium","authors":"Andreas Korn","doi":"10.1051/epjconf/202429701007","DOIUrl":"https://doi.org/10.1051/epjconf/202429701007","url":null,"abstract":"I summarize the stellar side of the cosmological lithium problem(s). Evidence from independent studies is accumulating and indicates that stars may very well be fully responsible for lowering their surface lithium from the predicted primordial value to observed levels through internal element-transport mechanisms collectively referred to as atomic diffusion.\u0000While atomic diffusion can be modelled from first principles, stellar evolution uses a parametrized representation of convection making it impossible to predict convective-boundary mixing as a vital stellar process moderating atomic diffusion. More work is clearly needed here for a fully quantitative picture of lithium (and metallicity) evolution as stars age.\u0000Lastly, note that inferred stellar lithium-6 abundances have all but disappeared.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"100 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141342299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adoption of a token-based authentication model for the CMS Submission Infrastructure
Pub Date: 2024-05-23 | DOI: 10.1051/epjconf/202429504003
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, Frank Wurthwein
The CMS Submission Infrastructure (SI) is the main computing resource provisioning system for CMS workloads. A number of HTCondor pools are employed to manage this infrastructure, which aggregates geographically distributed resources from the WLCG and other providers. Historically, the model of authentication among the diverse components of this infrastructure has relied on the Grid Security Infrastructure (GSI), based on identities and X.509 certificates. In contrast, commonly used modern authentication standards are based on capabilities and tokens. The WLCG has identified this trend and aims at a transparent replacement of GSI for all its workload management, data transfer and storage access operations, to be completed during the current LHC Run 3. As part of this effort, and within the context of CMS computing, the Submission Infrastructure group is in the process of phasing out the GSI part of its authentication layers in favor of IDTokens and SciTokens. The use of tokens is already well integrated into the HTCondor Software Suite, which has allowed us to fully migrate the authentication between internal components of the SI. Additionally, recent versions of the HTCondor-CE support tokens as well, enabling CMS resource requests to Grid sites employing this CE technology to be granted by means of token exchange. After a rollout campaign to sites, successfully completed by the third quarter of 2022, all HTCondor CEs in use by CMS are already receiving SciToken-based pilot jobs. On the ARC CE side, a parallel campaign was launched to foster the adoption of the REST interface at CMS sites (required to enable token-based job submission via HTCondor-G), which is also nearing completion. In this contribution, the newly adopted authentication model will be described. We will then report on the migration status and the final steps towards a complete GSI phase-out in the CMS SI.
{"title":"Adoption of a token-based authentication model for the CMS Submission Infrastructure","authors":"A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, Frank Wurthwein","doi":"10.1051/epjconf/202429504003","DOIUrl":"https://doi.org/10.1051/epjconf/202429504003","url":null,"abstract":"The CMS Submission Infrastructure (SI) is the main computing resource provisioning system for CMS workloads. A number of HTCondor pools are employed to manage this infrastructure, which aggregates geographically distributed resources from the WLCG and other providers. Historically, the model of authentication among the diverse components of this infrastructure has relied on the Grid Security Infrastructure (GSI), based on identities and X509 certificates. In contrast, commonly used modern authentication standards are based on capabilities and tokens. The WLCG has identified this trend and aims at a transparent replacement of GSI for all its workload management, data transfer and storage access operations, to be completed during the current LHC Run 3. As part of this effort, and within the context of CMS computing, the Submission Infrastructure group is in the process of phasing out the GSI part of its authentication layers, in favor of IDTokens and Scitokens. The use of tokens is already well integrated into the HTCondor Software Suite, which has allowed us to fully migrate the authentication between internal components of SI. Additionally, recent versions of the HTCondor-CE support tokens as well, enabling CMS resource requests to Grid sites employing this CE technology to be granted by means of token exchange. After a rollout campaign to sites, successfully completed by the third quarter of 2022, the totality of HTCondor CEs in use by CMS are already receiving Scitoken-based pilot jobs. On the ARC CE side, a parallel campaign was launched to foster the adoption of the REST interface at CMS sites (required to enable token-based job submission via HTCondor-G), which is nearing completion as well. In this contribution, the newly adopted authentication model will be described. We will then report on the migration status and final steps towards complete GSI phase out in the CMS SI.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141106376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The integration of heterogeneous resources in the CMS Submission Infrastructure for the LHC Run 3 and beyond
Pub Date: 2024-05-23 | DOI: 10.1051/epjconf/202429504046
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem
While the computing landscape supporting LHC experiments is currently dominated by x86 processors at WLCG sites, this configuration will evolve in the coming years. LHC collaborations will be increasingly employing HPC and Cloud facilities to process the vast amounts of data expected during the LHC Run 3 and the future HL-LHC phase. These facilities often feature diverse compute resources, including alternative CPU architectures like ARM and IBM Power, as well as a variety of GPU specifications. Using these heterogeneous resources efficiently is thus essential for the LHC collaborations to reach their future scientific goals. The Submission Infrastructure (SI) is a central element in CMS Computing, enabling resource acquisition and exploitation by CMS data processing, simulation and analysis tasks. The SI must therefore be adapted to ensure access to and optimal utilization of this heterogeneous compute capacity. Some steps in this evolution have already been taken, as CMS is currently making opportunistic use of a small pool of GPU slots provided mainly at the CMS WLCG sites. Additionally, Power9 processors have been validated for CMS production at the Marconi-100 cluster at CINECA. This note will describe the updated capabilities of the SI to continue ensuring the efficient allocation and use of computing resources by CMS, despite their increasing diversity. The next steps towards a full integration and support of heterogeneous resources according to CMS needs will also be reported.
{"title":"The integration of heterogeneous resources in the CMS Submission Infrastructure for the LHC Run 3 and beyond","authors":"A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem","doi":"10.1051/epjconf/202429504046","DOIUrl":"https://doi.org/10.1051/epjconf/202429504046","url":null,"abstract":"While the computing landscape supporting LHC experiments is currently dominated by x86 processors at WLCG sites, this configuration will evolve in the coming years. LHC collaborations will be increasingly employing HPC and Cloud facilities to process the vast amounts of data expected during the LHC Run 3 and the future HL-LHC phase. These facilities often feature diverse compute resources, including alternative CPU architectures like ARM and IBM Power, as well as a variety of GPU specifications. Using these heterogeneous resources efficiently is thus essential for the LHC collaborations reaching their future scientific goals. The Submission Infrastructure (SI) is a central element in CMS Computing, enabling resource acquisition and exploitation by CMS data processing, simulation and analysis tasks. The SI must therefore be adapted to ensure access and optimal utilization of this heterogeneous compute capacity. Some steps in this evolution have been already taken, as CMS is currently using opportunistically a small pool of GPU slots provided mainly at the CMS WLCG sites. Additionally, Power9 processors have been validated for CMS production at the Marconi-100 cluster at CINECA. This note will describe the updated capabilities of the SI to continue ensuring the efficient allocation and use of computing resources by CMS, despite their increasing diversity. The next steps towards a full integration and support of heterogeneous resources according to CMS needs will also be reported.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"7 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141107025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Repurposing of the Run 2 CMS High Level Trigger Infrastructure as a Cloud Resource for Offline Computing
Pub Date: 2024-05-23 | DOI: 10.1051/epjconf/202429503036
M. Mascheroni, A. P. Yzquierdo, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, D. Spiga, C. Wissing, Frank Wurthwein
The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU farm was initially employed as an opportunistic resource, exploited during inter-fill periods of LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, being located on-site at the LHC interaction point 5 (P5), where the CMS detector is installed. This resource has been configured to support the execution of critical CMS tasks, such as prompt detector data reconstruction. It can therefore be used in combination with the dedicated Tier 0 capacity at CERN, in order to process and absorb peaks in the stream of data coming from the CMS detector. The initial configuration for this resource, based on statically configured VMs, provided the required level of functionality. However, regular operations of this cluster revealed certain limitations compared to the resource provisioning and use model employed at WLCG sites. A new configuration, based on a vacuum-like model, has been implemented for this resource in order to address the detected shortcomings. This paper reports on this redeployment work on the permanent cloud for enhanced support of CMS offline computing, comparing the functionality of the former and new models, along with the commissioning effort for the new setup.
{"title":"Repurposing of the Run 2 CMS High Level Trigger Infrastructure as a Cloud Resource for Offline Computing","authors":"M. Mascheroni, A. P. Yzquierdo, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, D. Spiga, C. Wissing, Frank Wurthwein","doi":"10.1051/epjconf/202429503036","DOIUrl":"https://doi.org/10.1051/epjconf/202429503036","url":null,"abstract":"The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU farm was initially employed as an opportunistic resource, exploited during inter-fill periods, in the LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, being located on-site at the LHC interaction point 5 (P5), where the CMS detector is installed. This resource has been configured to support the execution of critical CMS tasks, such as prompt detector data reconstruction. It can therefore be used in combination with the dedicated Tier 0 capacity at CERN, in order to process and absorb peaks in the stream of data coming from the CMS detector. The initial configuration for this resource, based on statically configured VMs, provided the required level of functionality. However, regular operations of this cluster revealed certain limitations compared to the resource provisioning and use model employed in the case of WLCG sites. A new configuration, based on a vacuum-like model, has been implemented for this resource in order to solve the detected shortcomings. This paper reports about this redeployment work on the permanent cloud for an enhanced support to CMS offline computing, comparing the former and new models’ respective functionalities, along with the commissioning effort for the new setup.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"51 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141103068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HPC resources for CMS offline computing: An integration and scalability challenge for the Submission Infrastructure
Pub Date: 2024-05-23 | DOI: 10.1051/epjconf/202429501035
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem
The computing resource needs of LHC experiments are expected to continue growing significantly during Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud resources will provide a comparable, or even dominant, fraction of the total compute capacity. The coming years present a challenge for the experiments’ resource provisioning models, both in terms of scalability and increasing complexity. The CMS Submission Infrastructure (SI) provisions computing resources for CMS workflows. This infrastructure is built on a set of federated HTCondor pools, currently aggregating 400k CPU cores distributed worldwide and supporting the simultaneous execution of over 200k computing tasks. Incorporating HPC resources into CMS computing represents, first of all, an integration challenge, as HPC centers are much more diverse than Grid sites. Secondly, evolving the present SI, dimensioned to harness the current CMS computing capacity, to reach the resource scales required for the HL-LHC phase, while maintaining global flexibility and efficiency, will represent an additional challenge for the SI. To preventively address future potential scalability limits, the SI team regularly runs tests to explore the maximum reach of our infrastructure. In this note, the integration of HPC resources into CMS offline computing is summarized, the potential concerns for the SI derived from the increased scale of operations are described, and the most recent results of scalability tests on the CMS SI are reported.
{"title":"HPC resources for CMS offline computing: An integration and scalability challenge for the Submission Infrastructure","authors":"A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem","doi":"10.1051/epjconf/202429501035","DOIUrl":"https://doi.org/10.1051/epjconf/202429501035","url":null,"abstract":"The computing resource needs of LHC experiments are expected to continue growing significantly during the Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud resources will provide a comparable, or even dominant, fraction of the total compute capacity. The future years present a challenge for the experiments’ resource provisioning models, both in terms of scalability and increasing complexity. The CMS Submission Infrastructure (SI) provisions computing resources for CMS workflows. This infrastructure is built on a set of federated HTCondor pools, currently aggregating 400k CPU cores distributed worldwide and supporting the simultaneous execution of over 200k computing tasks. Incorporating HPC resources into CMS computing represents firstly an integration challenge, as HPC centers are much more diverse compared to Grid sites. Secondly, evolving the present SI, dimensioned to harness the current CMS computing capacity, to reach the resource scales required for the HLLHC phase, while maintaining global flexibility and efficiency, will represent an additional challenge for the SI. To preventively address future potential scalability limits, the SI team regularly runs tests to explore the maximum reach of our infrastructure. In this note, the integration of HPC resources into CMS offline computing is summarized, the potential concerns for the SI derived from the increased scale of operations are described, and the most recent results of scalability test on the CMS SI are reported.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"38 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141103790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallelizing Air Shower Simulation for Background Characterization in IceCube
Pub Date: 2024-05-09 | DOI: 10.1051/epjconf/202429511016
K. Meagher, J. Santen
The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. For every observed neutrino event, there are over 10⁶ background events caused by cosmic ray air shower muons. In order to properly separate signal from background, it is necessary to produce Monte Carlo simulations of these air showers. Although IceCube has to date produced large quantities of background simulation, these studies still remain statistics-limited. The first stage of simulation requires heavy CPU usage while the second stage requires heavy GPU usage. Processing both of these stages on the same node results in an underutilized GPU, but using different nodes encounters bandwidth bottlenecks. Furthermore, due to the power-law energy spectrum of cosmic rays, the memory footprint of the detector response often exceeded the available memory in unpredictable ways. This proceeding presents new client–server code which parallelizes the first stage onto multiple CPUs on the same node and then passes the output to the GPU for photon propagation. This results in GPU utilization of greater than 90%, more predictable memory usage, and an overall factor of 20 improvement in speed over previous techniques.
{"title":"Parallelizing Air Shower Simulation for Background Characterization in IceCube","authors":"K. Meagher, J. Santen","doi":"10.1051/epjconf/202429511016","DOIUrl":"https://doi.org/10.1051/epjconf/202429511016","url":null,"abstract":"The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. For every observed neutrino event, there are over 106 background events caused by cosmic ray air shower muons. In order to properly separate signal from background, it is necessary to produce Monte Carlo simulations of these air showers. Although to-date, IceCube has produced large quantities of background simulation, these studies still remain statistics limited. The first stage of simulation requires heavy CPU usage while the second stage requires heavy GPU usage. Processing both of these stages on the same node will result in an underutilized GPU but using different nodes will encounter bandwidth bottlenecks. Furthermore, due to the power-law energy spectrum of cosmic rays, the memory footprint of the detector response often exceeded the limit in unpredictable ways. This proceeding presents new client–server code which parallelizes the first stage onto multiple CPUs on the same node and then passes it on to the GPU for photon propagation. This results in GPU utilization of greater than 90% as well as more predictable memory usage and an overall factor of 20 improvement in speed over previous techniques.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":" 14","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140996424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal theory of heavy ion double charge exchange reactions
Pub Date: 2024-05-08 | DOI: 10.1051/epjconf/202429204001
H. Lenske, J. Bellone, M. Colonna, Danilo Gambacurta, José-Antonio Lay
The theory of heavy ion double charge exchange (DCE) reactions A(Z, N) → A(Z ± 2, N ∓ 2) is recapitulated, emphasizing the role of Double Single Charge Exchange (DSCE) and pion-nucleon Majorana DCE (MDCE) reactions. DSCE reactions are of second-order distorted wave character, mediated by isovector nucleon-nucleon (NN) interactions. The DSCE response functions resemble the nuclear matrix elements (NME) of 2ν2β decay. The MDCE process proceeds by a dynamically generated effective rank-2 isotensor interaction, defined by off-shell pion-nucleon DCE scattering. In closure approximation, pion potentials and two-nucleon correlations are obtained, similar to the neutrino potentials and the intranuclear exchange of Majorana neutrinos in 0ν2β Majorana double beta decay (MDBD).
{"title":"Formal theory of heavy ion double charge exchange reactions","authors":"H. Lenske, J. Bellone, M. Colonna, Danilo Gambacurta, José-Antonio Lay","doi":"10.1051/epjconf/202429204001","DOIUrl":"https://doi.org/10.1051/epjconf/202429204001","url":null,"abstract":"The theory of heavy ion double charge exchange (DCE) reactions A(Z, N) → A(Z ± 2, N ∓ 2) is recapitulated emphasizing the role of Double Single Charge Exchange (DSCE) and pion-nucleon Majorana DCE (MDCE) reactions. DSCE reactions are of second–order distorted wave character, mediated by isovector nucleon-nucleon (NN) interactions. The DSCE response functions resemble the nuclear matrix elements (NME) of 2ν2β decay. The MDCE process proceeds by a dynamically generated effective rank-2 isotensor interaction, defined by off–shell pion–nucleon DCE scattering. In closure approximation pion potentials and two–nucleon correlations are obtained, similar to the neutrino potentials and the intranuclear exchange of Majorana neutrinos in 0ν2β Majorana double beta decay (MDBD).","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":" 42","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140998319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating galaxy clusters mass density maps from mock multiview images via deep learning
Pub Date: 2024-04-08 | DOI: 10.1051/epjconf/202429300013
Daniel de Andres, W. Cui, G. Yepes, M. Petris, G. Aversano, A. Ferragamo, Federico De Luca, A. J. Munoz
Galaxy clusters are composed of dark matter, gas and stars. Their dark matter component, which amounts to around 80% of the total mass, cannot be observed directly but can be traced by the distribution of the diffuse gas and the member galaxies. In this work, we aim to infer the cluster’s projected total mass distribution from mock observational data, i.e. stars, Sunyaev-Zeldovich, and X-ray, by training deep learning models. To this end, we have created a multiview image dataset from The Three Hundred simulation that is optimal for training machine learning models. We further study deep learning architectures based on the U-Net, covering both single-input and multi-input models. We show that the predicted mass distribution agrees well with the true one.
{"title":"Generating galaxy clusters mass density maps from mock multiview images via deep learning","authors":"Daniel de Andres, W. Cui, G. Yepes, M. Petris, G. Aversano, A. Ferragamo, Federico De Luca, A. J. Munoz","doi":"10.1051/epjconf/202429300013","DOIUrl":"https://doi.org/10.1051/epjconf/202429300013","url":null,"abstract":"Galaxy clusters are composed of dark matter, gas and stars. Their dark matter component, which amounts to around 80% of the total mass, cannot be directly observed but traced by the distribution of diffused gas and galaxy members. In this work, we aim to infer the cluster’s projected total mass distribution from mock observational data, i.e. stars, Sunyaev-Zeldovich, and X-ray, by training deep learning models. To this end, we have created a multiview images dataset from The Three Hundred simulation that is optimal for training Machine Learning models. We further study deep learning architectures based on the U-Net to account for single-input and multi-input models. We show that the predicted mass distribution agrees well with the true one.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"86 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140728990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Constraints on Mergers with SZ, Hydrodynamical simulations, Optical, and X-ray (ICM-SHOX)
Pub Date: 2024-04-05 | DOI: 10.1051/epjconf/202429300050
E. M. Silich, E. Bellomi, J. Sayers, J. Zuhone, U. Chadayammuri, S. Golwala, D. Hughes, A. Montaña, T. Mroczkowski, D. Nagai, D. Sánchez, S. Stanford, G. Wilson, M. Zemcov, A. Zitrin
Galaxy cluster mergers are representative of a wide range of physics, making them an excellent probe of the properties of dark matter and the ionized plasma of the intracluster medium. To date, most studies have focused on mergers occurring in the plane of the sky, where morphological features can be readily identified. To allow the study of mergers with arbitrary orientation, we have assembled multi-probe data for the eight-cluster ICM-SHOX sample that are sensitive to both morphology and line-of-sight velocity. The first ICM-SHOX paper [1] provided an overview of our methodology applied to one member of the sample, MACS J0018.5+1626, in order to constrain its merger geometry. That work resulted in the exciting new discovery of a velocity-space decoupling of its gas and dark matter distributions. In this work, we describe the availability and quality of multi-probe data for the full ICM-SHOX galaxy cluster sample. These datasets will form the observational basis of an upcoming analysis of the full ICM-SHOX galaxy cluster sample.
{"title":"Improved Constraints on Mergers with SZ, Hydrodynamical simulations, Optical, and X-ray (ICM-SHOX)","authors":"E. M. Silich, E. Bellomi, J. Sayers, J. Zuhone, U. Chadayammuri, S. Golwala, D. Hughes, A. Montaña, T. Mroczkowski, D. Nagai, D. Sánchez, S. Stanford, G. Wilson, M. Zemcov, A. Zitrin","doi":"10.1051/epjconf/202429300050","DOIUrl":"https://doi.org/10.1051/epjconf/202429300050","url":null,"abstract":"Galaxy cluster mergers are representative of a wide range of physics, making them an excellent probe of the properties of dark matter and the ionized plasma of the intracluster medium. To date, most studies have focused on mergers occurring in the plane of the sky, where morphological features can be readily identified. To allow study of mergers with arbitrary orientation, we have assembled multi-probe data for the eight-cluster ICM-SHOX sample sensitive to both morphology and line of sight velocity. The first ICM-SHOX paper [1] provided an overview of our methodology applied to one member of the sample, MACS J0018.5+1626, in order to constrain its merger geometry. That work resulted in an exciting new discovery of a velocity space decoupling of its gas and dark matter distributions. In this work, we describe the availability and quality of multi-probe data for the full ICM-SHOX galaxy cluster sample. These datasets will form the observational basis of an upcoming full ICM-SHOX galaxy cluster sample analysis.","PeriodicalId":11731,"journal":{"name":"EPJ Web of Conferences","volume":"12 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140738705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}