An automated bandwidth division for the LHCb upgrade trigger
T Evans, C Fitzpatrick, J Horswill
Computing and Software for Big Science 9(1):7 (2025). Published 2025-01-01; Epub 2025-05-21. DOI: 10.1007/s41781-025-00139-2
The upgraded Large Hadron Collider beauty (LHCb) experiment is the first detector based at a hadron collider to use a fully software-based trigger. The first 'High Level Trigger' stage (HLT1) reduces the event rate from 30 MHz to approximately 1 MHz based on reconstruction criteria from the tracking system, and consists of O(100) trigger selections implemented on Graphics Processing Units (GPUs). These selections are further refined following the full offline-quality reconstruction at the second stage (HLT2) before being saved for analysis. An automated bandwidth division has been performed to equitably divide this 1 MHz HLT1 Output Rate (OR) between the signals of interest to the LHCb physics program. This was achieved by optimizing a set of trigger selections to maximize efficiency for the signals of interest while keeping the total HLT1 output rate below a fixed cap. The bandwidth division tool has been used to determine the optimal selections for 35 selection algorithms over 80 characteristic physics channels.
{"title":"An automated bandwidth division for the LHCb upgrade trigger.","authors":"T Evans, C Fitzpatrick, J Horswill","doi":"10.1007/s41781-025-00139-2","DOIUrl":"10.1007/s41781-025-00139-2","url":null,"abstract":"<p><p>The upgraded Large Hadron Collider beauty (LHCb) experiment is the first detector based at a hadron collider using a fully software-based trigger. The first 'High Level Trigger' stage (HLT1) reduces the event rate from 30 MHz to approximately 1 MHz based on reconstruction criteria from the tracking system, and consists of <math><mrow><mi>O</mi> <mo>(</mo> <mn>100</mn> <mo>)</mo></mrow> </math> trigger selections implemented on Graphics Processing Units (GPUs). These selections are further refined following the full offline-quality reconstruction at the second stage (HLT2) prior to saving for analysis. An automated bandwidth division has been performed to equitably divide this 1 MHz HLT1 Output Rate (OR) between the signals of interest to the LHCb physics program. This was achieved by optimizing a set of trigger selections that maximize efficiency for signals of interest to LHCb while keeping the total HLT1 readout capped to a maximum. The bandwidth division tool has been used to determine the optimal selection for 35 selection algorithms over 80 characteristic physics channels.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12095408/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144143763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The LHCb Sprucing and Analysis Productions
Ahmed Abdelmotteleb, Alessandro Bertolin, Chris Burr, Ben Couturier, Ellinor Eckstein, Davide Fazzini, Nathan Grieser, Christophe Haen, Ryunosuke O'Neil, Eduardo Rodrigues, Nicole Skidmore, Mark Smith, Aidan R Wiederhold, Shunan Zhang
Computing and Software for Big Science 9(1):15 (2025). Published 2025-01-01; Epub 2025-08-04. DOI: 10.1007/s41781-025-00144-5
The LHCb detector underwent a comprehensive upgrade in preparation for the third data-taking run of the Large Hadron Collider (LHC), known as LHCb Upgrade I. With its increased data rate, Run 3 introduced considerable challenges in both data acquisition (online) and data processing and analysis (offline). The offline processing and analysis model was upgraded to handle the factor-of-30 increase in data volume and the associated demands of ever-growing analysis datasets, an effort led by the LHCb Data Processing and Analysis (DPA) project. This paper documents the LHCb "Sprucing" - the centralised offline data processing and selections - and "Analysis Productions" - the centralised and highly automated declarative nTuple production system. The DaVinci application used by Analysis Productions for tupling spruced data is described, as well as the apd and lbconda tools for data retrieval and analysis environment configuration. Together these tools allow for greatly improved analyst workflows and analysis preservation. Finally, the approach to data processing and analysis in the High-Luminosity Large Hadron Collider (HL-LHC) era - LHCb Upgrade II - is discussed.
{"title":"The LHCb Sprucing and Analysis Productions.","authors":"Ahmed Abdelmotteleb, Alessandro Bertolin, Chris Burr, Ben Couturier, Ellinor Eckstein, Davide Fazzini, Nathan Grieser, Christophe Haen, Ryunosuke O'Neil, Eduardo Rodrigues, Nicole Skidmore, Mark Smith, Aidan R Wiederhold, Shunan Zhang","doi":"10.1007/s41781-025-00144-5","DOIUrl":"10.1007/s41781-025-00144-5","url":null,"abstract":"<p><p>The LHCb detector underwent a comprehensive upgrade in preparation for the third data-taking run of the Large Hadron Collider (LHC), known as LHCb Upgrade I. With its increased data rate, Run 3 introduced considerable challenges in both data acquisition (online) and data processing and analysis (offline). The offline processing and analysis model was upgraded to handle the factor 30 increase in data volume and the associated demands of ever-growing datasets for analysis, led by the LHCb Data Processing and Analysis (DPA) project. This paper documents the LHCb \"Sprucing\" - the centralised offline data processing and selections - and \"Analysis Productions\" - the centralised and highly automated declarative nTuple production system. The DaVinci application used by analysis productions for tupling spruced data is described as well as the apd and lbconda tools for data retrieval and analysis environment configuration. These tools allow for greatly improved analyst workflows and analysis preservation. Finally, the approach to data processing and analysis in the High-Luminosity Large Hadron Collider (HL-LHC) era - LHCb Upgrade II - is discussed.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12321665/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144795721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

FPGA Implementation of a CNN-Based Topological Trigger for HL-LHC
J Brooke, E Clement, M Glowacki, S Paramesvaran, J Segal
Computing and Software for Big Science 9(1):18 (2025). Published 2025-01-01; Epub 2025-11-03. DOI: 10.1007/s41781-025-00150-7
The implementation of convolutional neural networks in programmable logic, for applications in fast online event selection at hadron colliders, is studied. In particular, an approach based on classifying full event images is studied, including hardware-aware optimisation of the network architecture and evaluation of physics performance using simulated data. A range of network models are identified that can be implemented within the resources of current FPGAs while meeting the stringent latency requirements of HL-LHC trigger systems. A candidate model that can be implemented in the CMS L1 trigger for the HL-LHC is shown to be capable of excellent signal/background discrimination for a key HL-LHC channel, HH(bbbb), although the performance depends strongly on the degree of pile-up mitigation applied before image generation.
{"title":"FPGA Implementation of a CNN-Based Topological Trigger for HL-LHC.","authors":"J Brooke, E Clement, M Glowacki, S Paramesvaran, J Segal","doi":"10.1007/s41781-025-00150-7","DOIUrl":"10.1007/s41781-025-00150-7","url":null,"abstract":"<p><p>The implementation of convolutional neural networks in programmable logic, for applications in fast online event selection at hadron colliders, is studied. In particular, an approach based on full event images for classification is studied, including hardware-aware optimisation of the network architecture, and evaluation of physics performance using simulated data. A range of network models are identified that can be implemented within resources of current FPGAs, as well as the stringent latency requirements of HL-LHC trigger systems. A candidate model that can be implemented in the CMS L1 trigger for HL-LHC is shown to be capable of excellent signal/background discrimination for a key HL-LHC channel, HH(bbbb), although the performance depends strongly on the degree of pile-up mitigation prior to image generation.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12583343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145453399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The LHCb Stripping Project: Sustainable Legacy Data Processing for High-Energy Physics
Nathan Allen Grieser, Eduardo Rodrigues, Niladri Sahoo, Shuqi Sheng, Nicole Skidmore, Mark Smith
Computing and Software for Big Science 9(1):21 (2025). Published 2025-01-01; Epub 2025-11-28. DOI: 10.1007/s41781-025-00151-6
The LHCb Stripping project is a pivotal component of the experiment's data processing framework, designed to refine vast volumes of collision data into manageable samples for offline analysis. It ensures the continued re-analysis of the Run 1 and Run 2 legacy data, maintains the software stack, and executes (re-)Stripping campaigns. As the focus shifts toward newer data sets, the project continues to optimize infrastructure for both legacy and live data processing. This paper provides a comprehensive overview of the Stripping framework, detailing its Python-configurable architecture, its integration with LHCb computing systems, and its large-scale campaign management. We highlight organizational advancements, such as GitLab-based workflows, continuous integration, automation, and parallelized processing, alongside computational challenges. Finally, we discuss lessons learned and outline a future roadmap to sustain efficient access to valuable physics legacy data sets for the LHCb collaboration.
{"title":"The LHCb Stripping Project: Sustainable Legacy Data Processing for High-Energy Physics.","authors":"Nathan Allen Grieser, Eduardo Rodrigues, Niladri Sahoo, Shuqi Sheng, Nicole Skidmore, Mark Smith","doi":"10.1007/s41781-025-00151-6","DOIUrl":"10.1007/s41781-025-00151-6","url":null,"abstract":"<p><p>The LHCb Stripping project is a pivotal component of the experiment's data processing framework, designed to refine vast volumes of collision data into manageable samples for offline analysis. It ensures the re-analysis of Runs 1 and 2 legacy data, maintains the software stack, and executes (re-)Stripping campaigns. As the focus shifts toward newer data sets, the project continues to optimize infrastructure for both legacy and live data processing. This paper provides a comprehensive overview of the Stripping framework, detailing its Python-configurable architecture, integration with LHCb computing systems, and large-scale campaign management. We highlight organizational advancements, such as GitLab-based workflows, continuous integration, automation, and parallelized processing, alongside computational challenges. Finally, we discuss lessons learned and outline a future road-map to sustain efficient access to valuable physics legacy data sets for the LHCb collaboration.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"21"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12662921/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145649585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Soft Margin Spectral Normalization for GANs
Alexander Rogachev, Fedor Ratnikov
Computing and Software for Big Science (2024). Published 2024-07-02. DOI: 10.1007/s41781-024-00120-5

PanDA: Production and Distributed Analysis System
T. Maeno, A. Alekseev, F. H. Barreiro Megino, Kaushik De, Wen Guan, E. Karavakis, A. Klimentov, T. Korchuganova, Fahui Lin, P. Nilsson, T. Wenaus, Zhaoyu Yang, Xin Zhao
Computing and Software for Big Science (2024). Published 2024-01-23. DOI: 10.1007/s41781-024-00114-3

KinFit: A Kinematic Fitting Package for Hadron Physics Experiments
Waleed Esmail, Jana Rieger, Jenny Taylor, Malin Bohman, Karin Schönning
Computing and Software for Big Science (2024). Published 2024-01-07. DOI: 10.1007/s41781-023-00112-x

Fast Simulation for the Super Charm-Tau Factory Detector
Alexander Barnyakov, M. Belozyorova, V. Bobrovnikov, Sergey Kononov, D. Kyshtymov, Dmitry Maksimov, Georgiy Razuvaev, A. Sukharev, Korneliy Todyshev, Vitaliy Vorobyev, Anastasiia Zhadan, D. Zhadan
Computing and Software for Big Science (2024). Published 2024-01-02. DOI: 10.1007/s41781-023-00108-7

A Flexible and Efficient Approach to Missing Transverse Momentum Reconstruction
William Balunas, Donatella Cavalli, Teng Jian Khoo, Matthew Klein, Peter Loch, Federica Piazza, Caterina Pizio, Silvia Resconi, Douglas Schaefer, Russell Smith, Sarah Williams
Computing and Software for Big Science 8(1):2 (2024). Published 2024-01-01; Epub 2024-01-02. DOI: 10.1007/s41781-023-00110-z
Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of "invisible" objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.
{"title":"A Flexible and Efficient Approach to Missing Transverse Momentum Reconstruction.","authors":"William Balunas, Donatella Cavalli, Teng Jian Khoo, Matthew Klein, Peter Loch, Federica Piazza, Caterina Pizio, Silvia Resconi, Douglas Schaefer, Russell Smith, Sarah Williams","doi":"10.1007/s41781-023-00110-z","DOIUrl":"10.1007/s41781-023-00110-z","url":null,"abstract":"<p><p>Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of \"invisible\" objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10761467/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139098887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

FunTuple: A New N-tuple Component for Offline Data Processing at the LHCb Experiment
Abhijit Mathad, Martina Ferrillo, Sacha Barré, Patrick Koppenburg, Patrick Owen, Gerhard Raven, Eduardo Rodrigues, Nicola Serra
Computing and Software for Big Science 8(1):6 (2024). Published 2024-01-01; Epub 2024-02-24. DOI: 10.1007/s41781-024-00116-1
The offline software framework of the LHCb experiment has undergone a significant overhaul to tackle the data processing challenges that will arise in the upcoming Run 3 and Run 4 of the Large Hadron Collider. This paper introduces FunTuple, a novel component developed for offline data processing within the LHCb experiment. This component enables the computation and storage of a diverse range of observables for both reconstructed and simulated events by leveraging the tools initially developed for the trigger system. This feature is crucial for ensuring consistency between trigger-computed and offline-analysed observables. The component and its tool suite offer users the flexibility to customise stored observables, and its reliability is validated through a rigorous, full-coverage suite of unit tests. This paper comprehensively explores FunTuple's design, interface, interaction with other algorithms, and its role in facilitating offline data processing for the LHCb experiment for the next decade and beyond.
{"title":"FunTuple: A New N-tuple Component for Offline Data Processing at the LHCb Experiment.","authors":"Abhijit Mathad, Martina Ferrillo, Sacha Barré, Patrick Koppenburg, Patrick Owen, Gerhard Raven, Eduardo Rodrigues, Nicola Serra","doi":"10.1007/s41781-024-00116-1","DOIUrl":"10.1007/s41781-024-00116-1","url":null,"abstract":"<p><p>The offline software framework of the LHCb experiment has undergone a significant overhaul to tackle the data processing challenges that will arise in the upcoming Run 3 and Run 4 of the Large Hadron Collider. This paper introduces FunTuple, a novel component developed for offline data processing within the LHCb experiment. This component enables the computation and storage of a diverse range of observables for both reconstructed and simulated events by leveraging on the tools initially developed for the trigger system. This feature is crucial for ensuring consistency between trigger-computed and offline-analysed observables. The component and its tool suite offer users flexibility to customise stored observables, and its reliability is validated through a full-coverage set of rigorous unit tests. This paper comprehensively explores FunTuple's design, interface, interaction with other algorithms, and its role in facilitating offline data processing for the LHCb experiment for the next decade and beyond.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"6"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358189/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}