
Latest publications from the 2010 17th IEEE-NPSS Real Time Conference

The MHD control system for the FTU tokamak
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750477
G. D'Antona, S. Cirant, M. Davoudi
In this paper the architecture of the MHD control system for the Frascati Tokamak Upgrade (FTU) is presented. A set of hardware consisting of FPGA and DSP modules on a PXI bus is proposed for executing the control and estimation algorithms. Data communication among the hardware modules, in both on-line experiment mode and off-line data acquisition, is described. A model predictive protection system has been developed that monitors the antenna angles in real time and, using a mechanical model of the antennas and their motor drive system, predicts the rest position of the antennas; mechanical damage is prevented by alarming the control system and stopping the motors.
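As a hedged illustration of the protection idea only (not the FTU implementation; the end-stop limit, braking deceleration, and function names below are assumptions), the rest position under a constant braking deceleration can be predicted from the current angle and angular velocity and checked against a mechanical limit:

```python
# Illustrative sketch, not the FTU firmware: predict an antenna's rest
# position assuming constant braking deceleration, and flag an alarm if
# the predicted rest position would exceed a mechanical end stop.

ANGLE_LIMIT_DEG = 30.0   # hypothetical mechanical end stop
BRAKE_DECEL = 50.0       # deg/s^2, hypothetical braking deceleration

def predicted_rest_angle(angle_deg, velocity_dps, decel=BRAKE_DECEL):
    """Rest position under constant deceleration: theta + v^2 / (2a),
    in the direction of motion (simple kinematics standing in for the
    full mechanical/motor-drive model)."""
    if velocity_dps == 0.0:
        return angle_deg
    direction = 1.0 if velocity_dps > 0 else -1.0
    return angle_deg + direction * velocity_dps ** 2 / (2.0 * decel)

def must_stop_motors(angle_deg, velocity_dps):
    """True if the motors should be stopped to avoid hitting the stop."""
    return abs(predicted_rest_angle(angle_deg, velocity_dps)) > ANGLE_LIMIT_DEG
```

The point of the model-predictive approach is that the check runs on the *predicted* rest position, so braking starts early enough to matter.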
Citations: 9
Measurement system of light curves from nearby supernova bursts for the Super-Kamiokande experiment
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750385
S. Yamada, Y. Hayato, M. Ikeno, M. Nakahata, S. Nakayama, Y. Obayashi, K. Okumura, M. Shiozawa, T. Uchida, T. Yokozawa
Super-Kamiokande is a ring imaging Cherenkov detector for astro-particle physics that consists of 50 ktons of pure water and about 13000 photomultiplier tubes (PMTs). As well as measuring atmospheric and solar neutrinos, one of the main purposes of the detector is to detect neutrinos from a supernova burst. For a nearby supernova burst at a distance of 500 light years, the neutrino event rate in the Super-Kamiokande detector is expected to reach 30 MHz, which would be a huge load for the current data acquisition (DAQ) system. We are therefore developing an independent DAQ system as a backup for such a nearby supernova burst. This system will measure and record the total number of hits in the detector using the digitized signals from the current front-end electronics, from which we can obtain the time variation of the total charge deposited in the detector during the supernova burst period. The specification of the new system and the current status of the development are reported.
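The core of the light-curve measurement can be sketched as a hit-counting histogram: total hits are accumulated per fixed time bin, giving the time variation of detector activity during the burst. The bin width and function names here are illustrative assumptions, not the Super-Kamiokande electronics logic:

```python
# Minimal sketch of a light-curve histogram: count hits per time bin.
from collections import Counter

def light_curve(hit_times_s, bin_width_s):
    """Map each hit timestamp (seconds) to a bin index and count hits
    per bin; returns {bin_index: n_hits} sorted by bin index."""
    counts = Counter(int(t / bin_width_s) for t in hit_times_s)
    return dict(sorted(counts.items()))
```

In the real system the counting is done in hardware on digitized front-end signals; this sketch only shows the binning arithmetic.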
Citations: 0
An AdvancedTCA based data concentrator and event building architecture
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750387
A. Mann, I. Konorov, Florian Goslich, S. Paul
To address the data-rate requirements for upcoming experiments in high energy physics, we present a configurable architecture for data concentration and event building, based on the AdvancedTCA and MicroTCA standards. The core component is a µTCA-based module which connects a Lattice ECP3 FPGA to up to 8 front-panel fiber ports for data input from the front-end electronics. In addition, the fiber ports can distribute a synchronization clock and configuration information from a central time-distribution system. To buffer the incoming data, the module provides up to 2 SO-DIMM sockets for standard DDR3 memory modules. With different firmware functionality, the buffer module can then interface to a µTCA shelf backplane via, e.g., PCI Express. To allow event building for more than 8 input links, 4 buffer modules can be combined on an ATCA carrier card, which connects to the high-speed links on the µTCA connector. The connections between the 4 µTCA cards and the ATCA backplane can then be configured dynamically by a passive crosspoint switch on the ATCA carrier card. Thus, multiple event-building topologies can be configured on the carrier card and within the full ATCA shelf to adapt to different system sizes and communication patterns.
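The event-building step itself can be sketched at the software level (names hypothetical; the actual system does this in FPGA firmware): fragments arriving on separate input links are grouped by event number, and a complete event is emitted once every link has reported:

```python
# Toy sketch of event building: group fragments by event id across links.

def build_events(fragments, n_links):
    """fragments: iterable of (event_id, link_id, payload) tuples, in
    arrival order. Yields (event_id, {link_id: payload}) as soon as all
    n_links have delivered a fragment for that event."""
    pending = {}
    for event_id, link_id, payload in fragments:
        parts = pending.setdefault(event_id, {})
        parts[link_id] = payload
        if len(parts) == n_links:
            yield event_id, pending.pop(event_id)
```

A hardware implementation additionally needs timeouts and error handling for missing fragments, which this sketch omits.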
Citations: 4
Passive Optical Networks for Timing-Trigger and Control applications in high energy physics experiments
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750353
I. Papakonstantinou, C. Soós, S. Papadopoulos, S. Détraz, C. Sigaud, P. Stejskal, S. Storey, J. Troska, F. Vasey
The present paper discusses recent advances in a Passive Optical Network (PON) inspired Timing-Trigger and Control scheme for the upgraded Super Large Hadron Collider. The proposed system targets the replacement of the Timing, Trigger and Control system installed in the LHC experiments' counting rooms, specifically the link currently known as TTCex-to-TTCrx. The timing PON is implemented with commercially available FPGAs and Ethernet PON transceivers, and provides a fixed-latency gigabit downlink that can carry Level-1 trigger accepts and commands, as well as an upstream link for feedback from the front-end electronics.
Citations: 2
Neutron scattering experiment automation with Python
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750475
P. Zolnierczuk, R. Riedel
PyDas is a set of Python modules used to integrate the various components of the SNS DAS system. It enables customized automation of neutron scattering experiments in a rapid and flexible manner, providing wxPython GUIs for routine experiments as well as IPython command-line scripting. Matplotlib and NumPy are used for data presentation and simple analysis. We present an overview of the SNS Data Acquisition System and PyDas architectures and implementation, along with examples of use. We also discuss plans for future development, as well as the challenges that must be met in maintaining PyDas for 20+ different scientific instruments.
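To convey the flavor of such scripted automation: the class and method names below are hypothetical stand-ins, not the actual PyDas API (whose modules drive the real SNS DAS hardware). The sketch shows the kind of scan loop a user might write interactively in IPython:

```python
# Hypothetical stand-in for an instrument control handle; the real PyDas
# modules talk to SNS DAS components instead of keeping a log.
class MockInstrument:
    def __init__(self):
        self.log = []

    def set_temperature(self, kelvin):
        self.log.append(("set_T", kelvin))

    def count(self, seconds):
        self.log.append(("count", seconds))

def temperature_scan(instrument, temps_k, count_s):
    """Step through temperature set points, counting at each one."""
    for t in temps_k:
        instrument.set_temperature(t)
        instrument.count(count_s)

inst = MockInstrument()
temperature_scan(inst, [10.0, 20.0], 60)
```

The value of a Python layer here is that loops, conditionals, and analysis (NumPy, Matplotlib) compose freely with instrument commands.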
Citations: 5
Digital filtering performance in the ATLAS Level-1 Calorimeter Trigger
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750349
D. Hadley
The ATLAS Level-1 Calorimeter Trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates, and to measure total and missing ET in the ATLAS Liquid Argon and Tile calorimeters. It is a pipelined processor system, with a new set of inputs being evaluated every 25 ns. The overall trigger decision has a latency budget of ∼2 µs, including all transmission delays. The calorimeter trigger uses about 7200 reduced-granularity analogue signals, which are first digitized at the 40 MHz LHC bunch-crossing frequency, before being passed to a digital Finite Impulse Response (FIR) filter. Due to latency and chip real-estate constraints, only a simple 5-element filter with limited precision can be used. Nevertheless, this filter achieves a significant reduction in noise, along with improving the bunch-crossing assignment and energy resolution for small signals. The context in which digital filters are used for the ATLAS Level-1 Calorimeter Trigger is presented, before describing the methods used to determine the best filter coefficients for each detector element. The performance of these filters is investigated with commissioning data and cross-checks of the calibration with initial beam data from ATLAS are shown.
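A minimal sketch of the two operations the abstract describes, with illustrative (uncalibrated) coefficients rather than the trigger's actual ones: a 5-tap FIR filter over the 40 MHz samples, followed by a local-maximum test that assigns the pulse to a bunch crossing:

```python
# Illustrative 5-element FIR filter and simplified bunch-crossing
# identification; coefficients are made up for the example.

COEFFS = [1, 4, 9, 5, 1]  # example 5-tap filter weights

def fir(samples, coeffs=COEFFS):
    """Correlate the ADC samples with the filter taps (valid region only):
    output[i] = sum_k coeffs[k] * samples[i + k]."""
    n = len(coeffs)
    return [sum(c * s for c, s in zip(coeffs, samples[i:i + n]))
            for i in range(len(samples) - n + 1)]

def peak_bunch_crossings(filtered):
    """A sample is assigned to a bunch crossing if it is a local maximum
    of the filtered sequence (simplified BCID logic)."""
    return [i for i in range(1, len(filtered) - 1)
            if filtered[i - 1] < filtered[i] >= filtered[i + 1]]
```

In the real hardware this runs with limited integer precision within the latency budget; the sketch only shows the arithmetic.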
Citations: 4
Performance of the ATLAS Inner Detector trigger algorithms in pp collisions at √s = 900 GeV
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750418
I. Christidi
The ATLAS Inner Detector (ID) trigger algorithms ran online during data taking with proton-proton collisions at the Large Hadron Collider (LHC) in December 2009. Preliminary results on the performance of the algorithms in collisions at a centre-of-mass energy of 900 GeV are presented, including comparisons to the ATLAS offline tracking algorithms and to simulations. The ATLAS trigger performs the online event selection in three stages. The ID information is used in the second and third triggering stages, called the Level-2 trigger (L2) and Event Filter (EF) respectively, and collectively the High Level Trigger (HLT). The HLT runs software algorithms in a large farm of commercial CPUs and is designed to reject collision events in real time, keeping the most interesting few in every thousand. The average execution time per event at L2 (EF) is about 40 ms (4 s), and the ID trigger algorithms can take only a fraction of that. Within this time, the data from interesting regions of the ID have to be accessed from central buffers through the network, unpacked, clustered and converted to the ATLAS global coordinates; pattern recognition then identifies the trajectories of charged particles (tracks); and finally these tracks are used in combination with other information to accept or reject events, according to whether they satisfy one or more trigger signatures. The various clients of the ID trigger information impose different constraints on the performance of the pattern recognition, in terms of efficiency and fake rate for tracks. An overview of the different uses of the ID trigger algorithms is given, and their online performance is exemplified with results from the use of L2 tracks for the online determination of the LHC beam position.
Citations: 6
Time-critical database conditions data-handling for the CMS experiment
Pub Date : 2010-05-24 DOI: 10.1109/TNS.2011.2155084
M. de Gruttola, S. Di Guida, V. Innocente, A. Pierro
Automatic, synchronous, and reliable population of the conditions databases is critical for the correct operation of the online selection as well as for the offline reconstruction and analysis of data. We describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make conditions data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users in a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and are hence immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic-ray challenges and the first LHC collision data, and many improvements have been made so far. The experience of these first years of operation is discussed in detail.
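A toy sketch of the drop-box flow described above, with all names hypothetical (the real system is built on database services, not an in-memory dictionary): payloads dropped by users are written to the online store and then streamed verbatim to the offline store, so the two stay synchronized:

```python
# Toy model of a conditions "drop box": write online, stream offline.

class ConditionsDropBox:
    def __init__(self):
        self.online_db = {}   # stands in for the online conditions DB
        self.offline_db = {}  # stands in for the offline replica

    def drop(self, tag, payload):
        """User-facing entry point: synchronize the payload into the
        online DB, then stream it to the offline DB."""
        self.online_db[tag] = payload
        self._stream_offline(tag)

    def _stream_offline(self, tag):
        # In the real system this is an automatic replication step.
        self.offline_db[tag] = self.online_db[tag]

box = ConditionsDropBox()
box.drop("EcalPedestals_v1", {"mean": 200.5})
```

The key property the paper emphasizes is that replication is automatic, so data dropped online becomes immediately usable offline.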
Citations: 0
Performance of the ATLAS first-level trigger with first LHC data
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750348
J. Lundberg
ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Its trigger system must reduce the anticipated proton collision rate of up to 40 MHz to a recordable event rate of 100–200 Hz. This is realized through a multi-level trigger system. The first-level trigger is implemented with custom-built electronics and makes an initial selection which reduces the rate to less than 100 kHz. The subsequent trigger selection is done in software running on PC farms. The first-level trigger decision is made by the central-trigger processor using coarse-grained calorimeter information, dedicated muon-trigger detectors, and a variety of additional trigger inputs from detectors in the forward regions. We present the performance of the first-level trigger during the commissioning of the ATLAS detector in early LHC running. We cover the trigger strategies used during the different machine commissioning phases, from first circulating beams and splash events to collisions. We describe how the very first proton events were successfully triggered using signals from scintillator trigger detectors in the forward region. For circulating and colliding beams, electrostatic button pick-up detectors were used to clock the arriving proton bunches; these signals were immediately used to aid the timing-in of the beams and the ATLAS detector. We describe the performance and timing-in of the first-level calorimeter and muon trigger systems. The operation of the trigger relies on its real-time monitoring capabilities. We describe how trigger rates, timing information, and dead-time fractions were monitored to ensure the very good performance of the system.
Citations: 4
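The rate reduction the abstract describes (40 MHz of bunch crossings down to 100–200 Hz of recorded events) can be made concrete with a little arithmetic. This is a minimal illustrative sketch using only the rates quoted in the abstract; the `rejection_factor` helper and variable names are my own, not ATLAS code.

```python
# Sketch of the multi-level rate reduction quoted in the abstract:
# up to 40 MHz collisions -> <100 kHz after the first-level trigger
# -> 100-200 Hz recordable. Numbers from the abstract; code is illustrative.

def rejection_factor(input_rate_hz: float, output_rate_hz: float) -> float:
    """Factor by which one trigger stage reduces the event rate."""
    return input_rate_hz / output_rate_hz

bunch_crossing_rate = 40e6   # anticipated proton collision rate (40 MHz)
level1_output_rate  = 100e3  # first-level trigger output (< 100 kHz)
recorded_rate       = 200.0  # recordable event rate (upper end of 100-200 Hz)

l1_rejection  = rejection_factor(bunch_crossing_rate, level1_output_rate)  # 400.0
sw_rejection  = rejection_factor(level1_output_rate, recorded_rate)        # 500.0
total         = rejection_factor(bunch_crossing_rate, recorded_rate)       # 200000.0
```

The product of the per-stage factors equals the overall rejection, which is why the custom-electronics first level and the software farms can share the load: each stage only needs a few-hundred-fold reduction.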
Online digital data processing for the T2K Fine Grained Detector
Pub Date : 2010-05-24 DOI: 10.1109/RTC.2010.5750338
P. Amaudruz, D. Bishop, N. Braam, C. Gutjahr, D. Karlen, R. Hasanen, R. Henderson, N. Honkanen, B. Kirby, T. Lindner, A. Miller, K. Mizouchi, C. Ohlmann, K. Olchanski, S. Oser, C. Pearson, P. Poffenberger, R. Poutissou, F. Retire, H. Tanaka, J. Zalipska
The T2K Fine Grained Detector is an active neutrino target that uses segmented scintillator bars to observe short-range particle tracks. 8448 multi-pixel photon counters (MPPCs) coupled to wavelength-shifting fibres detect the scintillation light. An application-specific integrated circuit (ASIC) shapes the MPPC waveform and uses a switched capacitor array to store up to 511 analog samples over 10.24 µs. High- and low-attenuation channels for each MPPC improve dynamic range. 12-bit serial quad ADCs digitize the ASIC analog output and interface with a field-programmable gate array (FPGA); each FPGA simultaneously reads out four ADCs and saves the synchronized samples in external digital memory. The system produces 13.5 MB of uncompressed data per acquisition with a target trigger rate of 20 Hz, and requires zero suppression to reduce data size and readout time. Firmware-based data compression uses an online pulse finder that decides whether to output pulse-height information, a section of waveform, or to suppress all data. The front-end FPGA transfers formatted data to collector cards through a 2 Gb/s optical-fiber interface using an efficient custom protocol. We have evaluated the performance of the FGD electronics system and the quality of its online data compression over the course of a physics data run.
{"title":"Online digital data processing for the T2K Fine Grained Detector","authors":"P. Amaudruz, D. Bishop, N. Braam, C. Gutjahr, D. Karlen, R. Hasanen, R. Henderson, N. Honkanen, B. Kirby, T. Lindner, A. Miller, K. Mizouchi, C. Ohlmann, K. Olchanski, S. Oser, C. Pearson, P. Poffenberger, R. Poutissou, F. Retire, H. Tanaka, J. Zalipska","doi":"10.1109/RTC.2010.5750338","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750338","url":null,"abstract":"The T2K Fine Grained Detector is an active neutrino target that uses segmented scintillator bars to observe short-range particle tracks. 8448 multi-pixel photon counters (MPPCs) coupled to wavelength-shifting fibres detect the scintillation light. An application-specific integrated circuit (ASIC) shapes the MPPC waveform and uses a switched capacitor array to store up to 511 analog samples over 10.24 µs. High- and low-attenuation channels for each MPPC improve dynamic range. 12-bit serial quad ADCs digitize the ASIC analog output and interface with a field-programmable gate array (FPGA); each FPGA simultaneously reads out four ADCs and saves the synchronized samples in external digital memory. The system produces 13.5 MB of uncompressed data per acquisition with a target trigger rate of 20 Hz, and requires zero suppression to reduce data size and readout time. Firmware-based data compression uses an online pulse finder that decides whether to output pulse-height information, a section of waveform, or to suppress all data. The front-end FPGA transfers formatted data to collector cards through a 2 Gb/s optical-fiber interface using an efficient custom protocol. We have evaluated the performance of the FGD electronics system and the quality of its online data compression over the course of a physics data run.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115373200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
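The abstract's zero-suppression scheme (an online pulse finder choosing between pulse-height output, a waveform section, or full suppression, to tame 13.5 MB × 20 Hz ≈ 270 MB/s of raw data) can be sketched in software. This is a hypothetical illustration of that decision logic, not the FGD firmware: the thresholds, window size, and the mapping of peak size to output mode are invented for the example.

```python
# Illustrative sketch of the FGD-style zero-suppression decision: the real
# system implements this in FPGA firmware on the 511-sample waveforms.
# Thresholds and window are assumptions, not detector parameters.

def zero_suppress(samples, waveform_threshold=50, pulse_threshold=20, window=8):
    """Classify one waveform: keep a waveform section, keep only the
    pulse height, or suppress the channel entirely."""
    peak = max(samples)
    if peak >= waveform_threshold:
        # Large pulse: keep a section of waveform around the peak sample.
        i = samples.index(peak)
        lo, hi = max(0, i - window), min(len(samples), i + window)
        return ("waveform", samples[lo:hi])
    if peak >= pulse_threshold:
        # Moderate pulse: the height alone is enough.
        return ("pulse", peak)
    # Below threshold: suppress all data for this channel.
    return ("suppressed", None)
```

Most channels in any given event carry no track, so the third branch dominates and the per-acquisition data volume shrinks well below the 13.5 MB uncompressed figure.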