
Latest publications from the 2010 17th IEEE-NPSS Real Time Conference

High rate packet transmission via IP-over-InfiniBand using commodity hardware
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750409
D. Bortolotti, A. Carbone, D. Galli, I. Lax, U. Marconi, G. Peco, S. Perazzini, V. Vagnoni, M. Zangoli
Amongst link technologies, InfiniBand has gained wide acceptance in the framework of High Performance Computing (HPC), due to its high bandwidth and in particular to its low latency. Since InfiniBand is very flexible, supporting several kinds of messages, it is suitable, in principle, not only for HPC, but also for the data acquisition systems of High Energy Physics (HEP) Experiments.
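The kind of small-packet throughput test such an evaluation relies on can be sketched in a few lines. The snippet below times back-to-back UDP datagrams over loopback; on an actual IP-over-InfiniBand setup the destination would be a remote node's address on the `ib0` interface. All names and parameters here are illustrative, not taken from the paper.

```python
import socket
import time

def measure_packet_rate(payload_size=64, n_packets=10000):
    """Send n_packets UDP datagrams back-to-back; return the achieved rate (packets/s)."""
    # Stand-in receiver on loopback; on an IPoIB setup this would instead be
    # a remote node's address on the ib0 interface.
    sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sink.bind(("127.0.0.1", 0))
    dest = sink.getsockname()

    src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size
    t0 = time.perf_counter()
    for _ in range(n_packets):
        src.sendto(payload, dest)
    elapsed = time.perf_counter() - t0

    src.close()
    sink.close()
    return n_packets / elapsed

print(f"{measure_packet_rate():.0f} packets/s")
```

Small payloads stress the per-packet overhead of the stack, which is where IPoIB's low latency matters most.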
Citations: 1
Triggers, data flow and the synchronization between the Auger surface detector and the AMIGA underground muon counters
Pub Date: 2010-05-24 DOI: 10.1109/TNS.2011.2142194
Z. Szadkowski
The aim of the AMIGA project (Auger Muons and Infill for the Ground Array) is the investigation of Extensive Air Showers at energies lower than those accessible to the standard Auger array, where the transition from galactic to extragalactic sources is expected. The Auger array is extended by a relatively small dedicated area of surface detectors, with nearby buried underground muon counters, on a grid of half or less the standard 1.5 km spacing. Lowering the Auger energy threshold by more than one order of magnitude allows a precise measurement of the cosmic ray spectrum in the very interesting regions of the second knee and the ankle. The paper describes the working principle of the Master/Slave (standard Auger surface detector / underground muon counters) synchronous data acquisition, the general triggering, and the extraction of data corresponding to real events from underground storage buffers, as applied in two prototypes: A) with 12.5 ns resolution (80 MHz), built from four segments: the standard Auger Front End Board (FEB) and Surface Single Board Computer (SSBC) on the surface, and the Digital Board with the FPGA and the Microcontroller Board underground; B) with four times higher resolution, 3.125 ns (320 MHz), built with only two segments: a new surface Front End Board supported by the NIOS® processor, and a CycloneIII™ Starter Kit board underground, also working with a NIOS® virtual processor, which replaces the external TI µC that has in the meantime become obsolete. The system with the NIOS® processors can remotely modify and update the AHDL firmware creating the hardware FPGA net structure responsible for the fast DAQ, the internal structure of the NIOS® (resources and peripherals), and the NIOS® firmware (C code) responsible for software data management. With the standard µC, the µC firmware was fixed and could not be updated remotely. The 80 MHz prototype passed laboratory tests with real scintillators. The 320 MHz prototype (still being optimized) is considered the ultimate AMIGA design.
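The quoted resolutions follow directly from the sampling clocks: the time resolution of a synchronous sampler is one clock period. A one-line check (illustrative only):

```python
def sampling_resolution_ns(clock_mhz):
    """Time resolution (one clock period) of a sampler driven at clock_mhz.

    Period in ns = 1000 / frequency in MHz.
    """
    return 1e3 / clock_mhz

print(sampling_resolution_ns(80))   # 12.5  -> the 80 MHz prototype
print(sampling_resolution_ns(320))  # 3.125 -> the 320 MHz prototype
```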
Citations: 7
Upgrades for the PHENIX data acquisition system
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750356
M. Purschke
PHENIX [1] is one of two large experiments at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC). At the time of this conference, Run 10 of RHIC is in progress and has generated about a petabyte of raw data. The following summer shutdown marks the beginning of the installation of the PHENIX upgrade detectors, the first of which will be commissioned for the upcoming Run 11. In order to accommodate the new detectors in the PHENIX data acquisition, we will start to implement significant changes to the system, such as the switch to a new generation of readout electronics and the move to 10 Gigabit Ethernet for the components with the highest data volume. Once fully installed, the new detectors will roughly triple the current maximum data rate, from about 600 MB/s to 1.8 GB/s.
Citations: 0
Development of an optical link card for the upgrade phase II of TileCal experiment
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750449
F. Carrió, V. Castillo, A. Ferrer, V. González, E. Higón, C. Marin, P. Moreno, E. Sanchis, C. Solans, A. Valero, J. Valls
This work presents the design of an optical link card developed in the frame of the R&D activities for the phase 2 upgrade of the TileCal experiment, as part of the evaluation of different technologies for the final choice in the next two years. The board is designed as a mezzanine which can work independently or plugged into the Optical Multiplexer Board of the TileCal back-end electronics. It includes two SNAP 12 optical connectors able to transmit and receive up to 75 Gbps, and one SFP optical connector for lower speeds and for compatibility with existing hardware such as the Read Out Driver. All processing is done in a Stratix II GX FPGA. Details are given on the hardware design, including the signal and power integrity analysis needed when working with such high data rates, and on the firmware development required to get the best performance out of the FPGA signal transceivers and to use a soft-core processor as the controller of the system.
Citations: 0
Commissioning of the ATLAS High Level Trigger with proton collisions at the LHC
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750350
B. Petersen
ATLAS is one of two general-purpose detectors at the Large Hadron Collider (LHC). The ATLAS trigger system uses fast reconstruction algorithms to reject a large rate of background events while still selecting potentially interesting signal events with good efficiency. After a first processing level (Level 1) using custom electronics, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events.
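The step-wise early-rejection idea can be sketched as a chain of increasingly expensive steps that aborts on the first failure. The step names, thresholds, and event fields below are made up for illustration; they are not from the ATLAS trigger menu.

```python
def run_hlt_chain(event, steps):
    """Run reconstruction steps in order, rejecting as soon as one fails.

    Each step is a (name, algorithm) pair; cheap algorithms run first so
    that most background events never reach the expensive later steps.
    Returns (accepted, name_of_rejecting_step_or_None).
    """
    for name, algorithm in steps:
        if not algorithm(event):
            return False, name
    return True, None

# Illustrative three-step chain: calorimeter confirmation, track matching,
# then full reconstruction with an isolation cut. Thresholds are invented.
steps = [
    ("calo_confirm", lambda e: e["calo_energy"] > 20.0),
    ("track_match",  lambda e: e["n_tracks"] >= 1),
    ("full_reco",    lambda e: e["isolation"] < 0.1),
]
print(run_hlt_chain({"calo_energy": 35.0, "n_tracks": 2, "isolation": 0.05}, steps))  # (True, None)
```

Ordering cheap, high-rejection steps first is what keeps the average per-event CPU cost low even at a high input rate.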
Citations: 1
Hard Real-Time wireless communication in the northern Pierre Auger Observatory
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750355
R. Kieckhafer
The Pierre Auger Cosmic Ray Observatory employs a large array of Surface Detector stations to detect the secondary particle showers generated by the arrivals of Ultra High Energy Cosmic Rays. The operational Auger South site uses a tower-based wireless network for communication between the stations and observatory campus. Plans for a larger Auger North array call for a similar system. However, a variety of factors have rendered direct station-to-tower routing infeasible in Auger North. Thus, it will employ a new paradigm, the Wireless Architecture for Hard Real-Time Embedded Networks (WAHREN) designed specifically for highly reliable message delivery over a fixed network, under hard real-time deadlines. This paper describes the WAHREN topology and protocols, as well as real-time performance evaluation, formal verification, testbed operation, and Markov reliability modeling. The status of system hardware development and an on-site Research and Development Array are also discussed.
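The abstract does not specify the Markov reliability model used, but the standard building block for such analyses is a two-state (up/down) repairable-component model; the sketch below shows its steady-state solution and the gain from a redundant pair of links. The rates are invented for illustration.

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state 'up' probability of a two-state repairable Markov model.

    Balancing the up->down and down->up flows, pi_up * lam = pi_down * mu,
    with pi_up + pi_down = 1, gives pi_up = mu / (lam + mu).
    """
    return repair_rate / (failure_rate + repair_rate)

def redundant_pair_availability(a_single):
    """Availability of two independent redundant links: up unless both are down."""
    return 1.0 - (1.0 - a_single) ** 2

# Illustrative per-hour rates: one failure per 100 h, one repair per hour.
a = steady_state_availability(failure_rate=0.01, repair_rate=1.0)
print(a, redundant_pair_availability(a))
```

Even this toy model shows why redundancy pays: the pair's unavailability is the square of the single link's, a large improvement when the single link is already mostly up.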
Citations: 7
DAQ architecture design of Daya Bay Reactor Neutrino Experiment
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750404
Fei Li, X. Ji, Xiao-nan Li, K. Zhu
The main task of the data acquisition (DAQ) system in the Daya Bay Reactor Neutrino Experiment is to record antineutrino candidate events and other background events. There are seventeen detectors at three sites. Each detector has a separate VME readout crate that contains the trigger and DAQ electronics modules. The DAQ system reads event data from the front-end electronics modules, concatenates the data fragments of the modules and packs them into a subsystem event, then transmits it to the back-end system for data-stream merging, monitoring and recording.
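The fragment-concatenation step can be sketched as a simple binary event builder. The on-wire layout below (a header with event id and fragment count, then each fragment prefixed by module id and length) is an assumption made for illustration, not the Daya Bay format.

```python
import struct

def build_subsystem_event(event_id, fragments):
    """Concatenate per-module data fragments into one subsystem event.

    Assumed layout: 8-byte header (event id, fragment count), then each
    fragment prefixed by its module id and byte length (all little-endian).
    """
    parts = [struct.pack("<II", event_id, len(fragments))]
    for module_id, data in fragments:
        parts.append(struct.pack("<II", module_id, len(data)))
        parts.append(data)
    return b"".join(parts)

def parse_subsystem_event(blob):
    """Invert build_subsystem_event: recover (event_id, fragments)."""
    event_id, n = struct.unpack_from("<II", blob, 0)
    offset, fragments = 8, []
    for _ in range(n):
        module_id, length = struct.unpack_from("<II", blob, offset)
        offset += 8
        fragments.append((module_id, blob[offset:offset + length]))
        offset += length
    return event_id, fragments

blob = build_subsystem_event(7, [(1, b"\x01\x02"), (2, b"\x03\x04\x05")])
print(parse_subsystem_event(blob))
```

Length-prefixing each fragment is what lets the back-end merge streams from many crates without per-module knowledge of payload sizes.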
Citations: 23
Commissioning of the ATLAS High Level muon trigger with beam collisions
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750408
M. Owen
The ATLAS experiment is a multipurpose experiment at the Large Hadron Collider (LHC) designed to study the interactions of the fundamental particles. The interaction rate at the LHC is such that a three-level trigger system is needed to select, in real time, the interesting events to be recorded by ATLAS. The LHC has recently provided the first pp collisions at √s = 7 TeV, and these first data are used to study the performance of the ATLAS High Level muon trigger. Good performance of the algorithms is observed.
Citations: 0
Network resiliency implementation in the ATLAS TDAQ system
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750373
S. Stancu, A. Al-Shabibi, S. Batraneanu, S. Ballestrero, C. Caramarcu, B. Martin, D. Savu, R. Sjoen, L. Valsan
The ATLAS TDAQ (Trigger and Data Acquisition) system performs the real-time selection of events produced by the detector. For this purpose approximately 2000 computers are deployed and interconnected through various high speed networks, whose architecture has already been described. This article focuses on the implementation and validation of network connectivity resiliency (previously presented at a conceptual level). Redundancy and eventually load balancing are achieved through the synergy of various protocols: link aggregation, OSPF (Open Shortest Path First), VRRP (Virtual Router Redundancy Protocol), MST (Multiple Spanning Trees). An innovative method for cost-effective redundant connectivity of high-throughput high-availability servers is presented. Furthermore, real-life examples are presented showing how redundancy works and, more importantly, how it might fail despite careful planning.
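Of the protocols listed, VRRP has the simplest core logic: among a group of redundant gateways, the live router with the highest priority acts as master, and a backup takes over when it fails. The sketch below models only that election; the tie-break by name is a simplification (real VRRP breaks ties on the higher interface IP address), and the router names are invented.

```python
def vrrp_master(routers):
    """Pick the acting VRRP master among redundant gateways.

    routers: iterable of (name, priority, alive). The live router with the
    highest priority wins; ties are broken by name here, a simplification
    of VRRP's tie-break on the higher interface IP address.
    """
    alive = [r for r in routers if r[2]]
    if not alive:
        return None
    return max(alive, key=lambda r: (r[1], r[0]))[0]

# Primary fails -> the backup with the next-highest priority takes over.
print(vrrp_master([("gw-primary", 200, False), ("gw-backup", 150, True)]))  # gw-backup
```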
Citations: 0
Real-time configuration changes of the ATLAS High Level Trigger
Pub Date: 2010-05-24 DOI: 10.1109/RTC.2010.5750407
F. Winklmeier
The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally, following the expected increase in luminosity of the LHC, to about 2300 nodes. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking, while ensuring a consistent and reproducible configuration across the entire HLT farm. The techniques developed to allow these real-time configuration changes are exemplified with two applications: trigger prescales and beamspot measurement. The prescale value determines the fraction of events an HLT algorithm is executed on, including whether it is deactivated. This feature is essential both during the commissioning phase of the HLT and for adjusting the mixture of recorded physics events during an LHC run. The primary event vertex distribution, from which the beam spot position and size can be extracted, is measured by a dedicated HLT algorithm on each node and periodically aggregated across the HLT farm; its parameters are published and stored in the conditions database. The result can be fed back to the HLT algorithms to maintain selection efficiency and rejection rates. Finally, the technologies employed to allow the simultaneous database access of thousands of applications in an online environment are shown.
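A prescale of N in this sense passes 1 of every N events through the chain. The counter-based sketch below is illustrative only; in particular, the convention that a prescale of zero or less deactivates the chain is an assumption, not taken from the ATLAS implementation.

```python
class PrescaledChain:
    """Run a trigger chain on 1 out of every `prescale` events.

    prescale = 1 runs on every event; prescale <= 0 deactivates the chain
    entirely (an assumed convention for this sketch).
    """

    def __init__(self, prescale):
        self.prescale = prescale
        self.counter = 0  # events seen so far

    def should_run(self):
        if self.prescale <= 0:          # chain deactivated
            return False
        self.counter += 1
        return self.counter % self.prescale == 0

chain = PrescaledChain(prescale=4)
fired = sum(chain.should_run() for _ in range(100))
print(fired)  # 25 -> a prescale of 4 passes a quarter of the events
```

Because the decision is a pure function of the stored prescale value, changing that value in the central database and re-reading it is enough to retune trigger rates without restarting the farm.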
Citations: 8