
Latest publications from EPJ Web of Conferences

Heavy flavor and quarkonia results from the PHENIX experiment
Pub Date: 2024-07-02 DOI: 10.1051/epjconf/202429609016
Krista Smith
The PHENIX experiment at RHIC has a unique large-rapidity coverage (1.2 < |y| < 2.2) for heavy flavor studies in heavy-ion collisions. This kinematic region has a smaller particle density and may undergo different nuclear effects before and after the hard process compared to mid-rapidity production. The latest PHENIX runs contain a large data set that allows, for the first time, the study of heavy flavor and J/ψ flow in the large-rapidity region in Au+Au collisions at √sNN = 200 GeV. This measurement has the potential to reveal a medium evolution distinct from that known at mid-rapidity.
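Flow measurements like the one described above amount to extracting the second Fourier coefficient v2 of the azimuthal particle distribution relative to the event plane. A minimal toy sketch (not PHENIX analysis code; the sample size, the accept-reject sampler, and the fixed event-plane angle Ψ = 0 are illustrative assumptions):

```python
import numpy as np

def flow_v2(phis, psi=0.0):
    """Estimate the elliptic-flow coefficient v2 = <cos 2(phi - Psi)>."""
    return float(np.mean(np.cos(2.0 * (phis - psi))))

def sample_phis(v2_true, n, rng):
    """Accept-reject sampling from dN/dphi ∝ 1 + 2*v2*cos(2*phi)."""
    out = []
    max_w = 1.0 + 2.0 * v2_true  # maximum of the (unnormalized) weight
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        w = 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
        keep = rng.uniform(0.0, max_w, size=n) < w
        out.extend(phi[keep].tolist())
    return np.array(out[:n])

rng = np.random.default_rng(0)
phis = sample_phis(0.1, 200_000, rng)
print(round(flow_v2(phis), 3))  # close to the injected v2 = 0.1
```

With the injected v2 = 0.1 the estimate converges to roughly 0.1 as the sample grows; a real analysis must in addition correct for the finite event-plane resolution.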
Citations: 1
The ups and downs of inferred cosmological lithium
Pub Date: 2024-06-14 DOI: 10.1051/epjconf/202429701007
Andreas Korn
I summarize the stellar side of the cosmological lithium problem(s). Evidence from independent studies is accumulating and indicates that stars may very well be fully responsible for lowering their surface lithium from the predicted primordial value to observed levels through internal element-transport mechanisms collectively referred to as atomic diffusion. While atomic diffusion can be modelled from first principles, stellar evolution uses a parametrized representation of convection, making it impossible to predict convective-boundary mixing, a vital stellar process moderating atomic diffusion. More work is clearly needed here for a fully quantitative picture of lithium (and metallicity) evolution as stars age. Lastly, note that inferred stellar lithium-6 abundances have all but disappeared.
Citations: 0
Adoption of a token-based authentication model for the CMS Submission Infrastructure
Pub Date: 2024-05-23 DOI: 10.1051/epjconf/202429504003
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, Frank Wurthwein
The CMS Submission Infrastructure (SI) is the main computing resource provisioning system for CMS workloads. A number of HTCondor pools are employed to manage this infrastructure, which aggregates geographically distributed resources from the WLCG and other providers. Historically, the model of authentication among the diverse components of this infrastructure has relied on the Grid Security Infrastructure (GSI), based on identities and X.509 certificates. In contrast, commonly used modern authentication standards are based on capabilities and tokens. The WLCG has identified this trend and aims at a transparent replacement of GSI for all its workload management, data transfer and storage access operations, to be completed during the current LHC Run 3. As part of this effort, and within the context of CMS computing, the Submission Infrastructure group is phasing out the GSI part of its authentication layers in favor of IDTokens and SciTokens. The use of tokens is already well integrated into the HTCondor Software Suite, which has allowed us to fully migrate the authentication between internal components of the SI. Additionally, recent versions of the HTCondor-CE support tokens as well, enabling CMS resource requests to Grid sites employing this CE technology to be granted by means of token exchange. After a rollout campaign to sites, successfully completed by the third quarter of 2022, all HTCondor CEs in use by CMS are already receiving SciToken-based pilot jobs. On the ARC CE side, a parallel campaign was launched to foster the adoption of the REST interface at CMS sites (required to enable token-based job submission via HTCondor-G), which is also nearing completion. In this contribution, the newly adopted authentication model is described. We then report on the migration status and the final steps towards a complete GSI phase-out in the CMS SI.
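For illustration only, the sketch below builds a toy, unsigned JWT-style token and checks a capability (scope) claim against an expiration time, mirroring the capability-based model described above. The claim names and scope strings are invented; this is not CMS or HTCondor code, and production SciTokens must of course be signature-verified:

```python
import base64, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT-style token."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def authorizes(claims: dict, needed_scope: str, now=None) -> bool:
    """Grant only if the token carries the scope and has not expired."""
    now = time.time() if now is None else now
    scopes = claims.get("scope", "").split()
    return needed_scope in scopes and claims.get("exp", 0) > now

# Toy token: header.payload.signature (signature left empty; no crypto here)
claims = {"sub": "cms-pilot", "scope": "compute.create compute.read", "exp": 2_000_000_000}
token = ".".join([b64url(b'{"alg":"none"}'), b64url(json.dumps(claims).encode()), ""])
print(authorizes(decode_payload(token), "compute.create", now=1_700_000_000))  # True
```

The point of the capability model is visible here: the decision depends only on what the token asserts the bearer may do (its scopes), not on who the bearer is.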
Citations: 1
The integration of heterogeneous resources in the CMS Submission Infrastructure for the LHC Run 3 and beyond
Pub Date: 2024-05-23 DOI: 10.1051/epjconf/202429504046
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem
While the computing landscape supporting LHC experiments is currently dominated by x86 processors at WLCG sites, this configuration will evolve in the coming years. LHC collaborations will increasingly employ HPC and Cloud facilities to process the vast amounts of data expected during LHC Run 3 and the future HL-LHC phase. These facilities often feature diverse compute resources, including alternative CPU architectures such as ARM and IBM Power, as well as a variety of GPU specifications. Using these heterogeneous resources efficiently is thus essential for the LHC collaborations to reach their future scientific goals. The Submission Infrastructure (SI) is a central element in CMS Computing, enabling resource acquisition and exploitation by CMS data processing, simulation and analysis tasks. The SI must therefore be adapted to ensure access to, and optimal utilization of, this heterogeneous compute capacity. Some steps in this evolution have already been taken: CMS is currently using, opportunistically, a small pool of GPU slots provided mainly at the CMS WLCG sites, and Power9 processors have been validated for CMS production at the Marconi-100 cluster at CINECA. This note describes the updated capabilities of the SI to continue ensuring the efficient allocation and use of computing resources by CMS, despite their increasing diversity. The next steps towards full integration and support of heterogeneous resources according to CMS needs will also be reported.
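At its core, the heterogeneity problem above is a matchmaking problem: jobs declare architecture and accelerator needs, and the infrastructure must route them to compatible slots. A toy filter under invented slot and job descriptions (this is not HTCondor ClassAd matchmaking, just the idea):

```python
def match(job, slots):
    """Return slots whose CPU architecture and GPU count satisfy the job."""
    return [s for s in slots
            if s["arch"] in job["wants_arch"]
            and s["gpus"] >= job.get("min_gpus", 0)]

# Hypothetical slot pool mixing Grid, HPC and Cloud resources
slots = [
    {"name": "wlcg-x86",   "arch": "x86_64",  "gpus": 0},
    {"name": "hpc-power9", "arch": "ppc64le", "gpus": 4},
    {"name": "cloud-arm",  "arch": "aarch64", "gpus": 1},
]

job = {"wants_arch": {"ppc64le", "x86_64"}, "min_gpus": 1}
print([s["name"] for s in match(job, slots)])  # ['hpc-power9']
```

The real negotiator must additionally weigh fair share, pool priorities and site policies; the sketch only shows why jobs must carry explicit architecture and GPU requirements once the pool is no longer uniformly x86.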
Citations: 1
Repurposing of the Run 2 CMS High Level Trigger Infrastructure as a Cloud Resource for Offline Computing
Pub Date: 2024-05-23 DOI: 10.1051/epjconf/202429503036
M. Mascheroni, A. P. Yzquierdo, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem, D. Spiga, C. Wissing, Frank Wurthwein
The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU farm was initially employed as an opportunistic resource, exploited during inter-fill periods of LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, located on-site at LHC interaction point 5 (P5), where the CMS detector is installed. This resource has been configured to support the execution of critical CMS tasks, such as prompt detector data reconstruction. It can therefore be used in combination with the dedicated Tier 0 capacity at CERN in order to process and absorb peaks in the stream of data coming from the CMS detector. The initial configuration for this resource, based on statically configured VMs, provided the required level of functionality. However, regular operation of this cluster revealed certain limitations compared to the resource provisioning and use model employed at WLCG sites. A new configuration, based on a vacuum-like model, has been implemented for this resource in order to address these shortcomings. This paper reports on the redeployment work on this permanent cloud to enhance support for CMS offline computing, comparing the functionalities of the former and new models, along with the commissioning effort for the new setup.
Citations: 1
HPC resources for CMS offline computing: An integration and scalability challenge for the Submission Infrastructure
Pub Date: 2024-05-23 DOI: 10.1051/epjconf/202429501035
A. P. Yzquierdo, M. Mascheroni, Edita Kizinevič, F. Khan, Hyunwoo Kim, M. A. Flechas, Nikos Tsipinakis, Saqib Haleem
The computing resource needs of LHC experiments are expected to continue growing significantly during Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud resources will provide a comparable, or even dominant, fraction of the total compute capacity. The coming years present a challenge for the experiments' resource provisioning models, both in terms of scalability and increasing complexity. The CMS Submission Infrastructure (SI) provisions computing resources for CMS workflows. This infrastructure is built on a set of federated HTCondor pools, currently aggregating 400k CPU cores distributed worldwide and supporting the simultaneous execution of over 200k computing tasks. Incorporating HPC resources into CMS computing represents firstly an integration challenge, as HPC centers are much more diverse than Grid sites. Secondly, evolving the present SI, dimensioned to harness the current CMS computing capacity, to reach the resource scales required for the HL-LHC phase, while maintaining global flexibility and efficiency, will represent an additional challenge. To preventively address future scalability limits, the SI team regularly runs tests to explore the maximum reach of our infrastructure. In this note, the integration of HPC resources into CMS offline computing is summarized, the potential concerns for the SI arising from the increased scale of operations are described, and the most recent results of scalability tests on the CMS SI are reported.
Citations: 0
Parallelizing Air Shower Simulation for Background Characterization in IceCube
Pub Date: 2024-05-09 DOI: 10.1051/epjconf/202429511016
K. Meagher, J. Santen
The IceCube Neutrino Observatory is a cubic-kilometer neutrino telescope located at the Geographic South Pole. For every observed neutrino event, there are over 10^6 background events caused by cosmic-ray air shower muons. In order to properly separate signal from background, it is necessary to produce Monte Carlo simulations of these air showers. Although IceCube has to date produced large quantities of background simulation, these studies remain statistics-limited. The first stage of simulation requires heavy CPU usage, while the second stage requires heavy GPU usage. Processing both stages on the same node results in an underutilized GPU, while using different nodes encounters bandwidth bottlenecks. Furthermore, due to the power-law energy spectrum of cosmic rays, the memory footprint of the detector response often exceeded the limit in unpredictable ways. This proceeding presents new client–server code which parallelizes the first stage onto multiple CPUs on the same node and then passes the results on to the GPU for photon propagation. This results in GPU utilization of greater than 90%, more predictable memory usage, and an overall factor-of-20 speed improvement over previous techniques.
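The client–server pattern described above can be caricatured as a fan-out of first-stage workers feeding a single second-stage consumer through a queue. The sketch below uses threads and toy stand-in functions purely to show the data flow; the actual code parallelizes the CPU stage across processes and runs photon propagation on a GPU:

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def simulate_shower(seed):
    """Stage 1 (CPU-heavy in reality): toy stand-in producing 'particles'."""
    return [seed * k % 7 for k in range(5)]

def propagate_photons(batch):
    """Stage 2 (GPU-heavy in reality): toy stand-in consuming a batch."""
    return sum(sum(particles) for particles in batch)

def run(seeds, workers=4, batch=8):
    """Fan stage 1 out to parallel workers; one consumer drains the queue in batches."""
    q = Queue()
    with ThreadPoolExecutor(workers) as pool:
        for shower in pool.map(simulate_shower, seeds):  # preserves input order
            q.put(shower)
    results, buf = [], []
    while not q.empty():
        buf.append(q.get())
        if len(buf) == batch:
            results.append(propagate_photons(buf))
            buf = []
    if buf:  # flush the final partial batch
        results.append(propagate_photons(buf))
    return results

print(run(range(16), workers=4, batch=8))
```

Batching is the key design choice: the single "GPU" consumer stays busy as long as the parallel producers keep the queue filled, which is how utilization above 90% becomes possible.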
Citations: 0
Formal theory of heavy ion double charge exchange reactions
Pub Date: 2024-05-08 DOI: 10.1051/epjconf/202429204001
H. Lenske, J. Bellone, M. Colonna, Danilo Gambacurta, José-Antonio Lay
The theory of heavy ion double charge exchange (DCE) reactions A(Z, N) → A(Z ± 2, N ∓ 2) is recapitulated, emphasizing the role of Double Single Charge Exchange (DSCE) and pion-nucleon Majorana DCE (MDCE) reactions. DSCE reactions are of second-order distorted wave character, mediated by isovector nucleon-nucleon (NN) interactions. The DSCE response functions resemble the nuclear matrix elements (NME) of 2ν2β decay. The MDCE process proceeds by a dynamically generated effective rank-2 isotensor interaction, defined by off-shell pion-nucleon DCE scattering. In closure approximation, pion potentials and two-nucleon correlations are obtained, similar to the neutrino potentials and the intranuclear exchange of Majorana neutrinos in 0ν2β Majorana double beta decay (MDBD).
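Schematically, the second-order distorted-wave character of the DSCE amplitude can be indicated as follows (a textbook-style sketch, not the paper's full expression; γ labels intermediate states of the singly charge-exchanged (Z ± 1, N ∓ 1) nucleus):

```latex
M^{(2)}_{\beta\alpha} \;\sim\; \sum_{\gamma}
\frac{\langle \beta \,|\, T_{NN} \,|\, \gamma \rangle\,
      \langle \gamma \,|\, T_{NN} \,|\, \alpha \rangle}
     {E_{\alpha} - E_{\gamma} + i\varepsilon},
\qquad
A(Z,N) \;\longrightarrow\; (Z \pm 1, N \mp 1) \;\longrightarrow\; A(Z \pm 2, N \mp 2)
```

Each insertion of the isovector interaction T_NN is a single charge-exchange step, which is why the DSCE response functions resemble the 2ν2β nuclear matrix elements built from two Gamow-Teller-like transitions.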
Citations: 1
Generating galaxy clusters mass density maps from mock multiview images via deep learning
Pub Date: 2024-04-08 DOI: 10.1051/epjconf/202429300013
Daniel de Andres, W. Cui, G. Yepes, M. Petris, G. Aversano, A. Ferragamo, Federico De Luca, A. J. Munoz
Galaxy clusters are composed of dark matter, gas and stars. Their dark matter component, which amounts to around 80% of the total mass, cannot be directly observed but traced by the distribution of diffused gas and galaxy members. In this work, we aim to infer the cluster’s projected total mass distribution from mock observational data, i.e. stars, Sunyaev-Zeldovich, and X-ray, by training deep learning models. To this end, we have created a multiview images dataset from The Three Hundred simulation that is optimal for training Machine Learning models. We further study deep learning architectures based on the U-Net to account for single-input and multi-input models. We show that the predicted mass distribution agrees well with the true one.
Citations: 0
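The multiview setup described in this abstract — stacking several mock observables (stars, Sunyaev-Zeldovich, X-ray) into one input for a multi-input CNN such as a U-Net — can be sketched in a few lines. This is a minimal illustration with NumPy only; the map names and shapes are hypothetical placeholders, not the authors' actual pipeline or The Three Hundred data format:

```python
import numpy as np

# Hypothetical stand-ins for mock observational maps of one cluster;
# in the paper these would come from The Three Hundred simulation.
H = W = 64
rng = np.random.default_rng(0)
views = {
    "stars": rng.random((H, W)),
    "sz":    rng.random((H, W)),
    "xray":  rng.random((H, W)),
}

def stack_multiview(views):
    """Standardize each observable map and stack them as input channels,
    the (C, H, W) layout a multi-channel CNN input would consume."""
    channels = []
    for name in sorted(views):          # fixed channel order
        m = views[name]
        m = (m - m.mean()) / (m.std() + 1e-8)   # per-channel standardization
        channels.append(m)
    return np.stack(channels, axis=0)

x = stack_multiview(views)
print(x.shape)  # (3, 64, 64)
```

A single-input model would be trained on one such channel at a time, while a multi-input model receives the full stacked tensor; the target in either case is the projected total mass map.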
Improved Constraints on Mergers with SZ, Hydrodynamical simulations, Optical, and X-ray (ICM-SHOX)
Pub Date : 2024-04-05 DOI: 10.1051/epjconf/202429300050
E. M. Silich, E. Bellomi, J. Sayers, J. Zuhone, U. Chadayammuri, S. Golwala, D. Hughes, A. Montaña, T. Mroczkowski, D. Nagai, D. Sánchez, S. Stanford, G. Wilson, M. Zemcov, A. Zitrin
Galaxy cluster mergers are representative of a wide range of physics, making them an excellent probe of the properties of dark matter and the ionized plasma of the intracluster medium. To date, most studies have focused on mergers occurring in the plane of the sky, where morphological features can be readily identified. To allow study of mergers with arbitrary orientation, we have assembled multi-probe data for the eight-cluster ICM-SHOX sample sensitive to both morphology and line of sight velocity. The first ICM-SHOX paper [1] provided an overview of our methodology applied to one member of the sample, MACS J0018.5+1626, in order to constrain its merger geometry. That work resulted in an exciting new discovery of a velocity space decoupling of its gas and dark matter distributions. In this work, we describe the availability and quality of multi-probe data for the full ICM-SHOX galaxy cluster sample. These datasets will form the observational basis of an upcoming full ICM-SHOX galaxy cluster sample analysis.
Citations: 0