A lossless data compression system for a real-time application in HEP data acquisition
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750389
C. Patauner, A. Marchioro, S. Bonacini, A. Rehman, W. Pribyl
This paper presents a compression system optimized for the reduction of data from pulse digitizing electronics.
{"title":"A lossless data compression system for a real-time application in HEP data acquisition","authors":"C. Patauner, A. Marchioro, S. Bonacini, A. Rehman, W. Pribyl","doi":"10.1109/RTC.2010.5750389","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750389","url":null,"abstract":"This paper presents a compression system optimized for the reduction of data from pulse digitizing electronics.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122613023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data reduction processes using FPGA for MicroBooNE liquid argon time projection chamber
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750354
Jinyuan Wu
MicroBooNE is a liquid argon time projection chamber (TPC) to be built at Fermilab for an accelerator-based neutrino physics experiment and as part of the R&D strategy for a large liquid argon detector at DUSEL. The waveforms of the ∼9000 sense wires in the chamber are continuously digitized at 2 M samples/s, which results in a large volume of data coming off the TPC. We have developed a lossless data reduction scheme based on Huffman coding and have tested it on cosmic ray data taken with a small liquid argon TPC, the BO detector. For sense wire waveforms produced by cosmic ray tracks, the Huffman coding scheme compresses the data by a factor of approximately 10; because the compression is lossless, the original data can be fully recovered. In addition to accelerator neutrino data, which arrive with a small duty cycle in sync with the accelerator beam spill, continuously digitized waveforms are to be stored temporarily in the MicroBooNE data acquisition system for about an hour, long enough to receive an external alert of a possible supernova event. A second scheme, Dynamic Decimation, has been developed to compress this potential supernova data further so that the storage can be implemented within a reasonable budget. In Dynamic Decimation, data are kept at the full sampling rate inside regions of interest (ROI) containing track-hit waveforms and are decimated to a lower sampling rate outside the ROI. Unlike typical zero-suppression schemes, Dynamic Decimation does not discard the data in the pedestal region but keeps them at a lower sampling rate. An additional factor of 10 in compression is achieved with Dynamic Decimation on the BO detector data, for a total compression ratio of approximately 100 when the Dynamic Decimation and Huffman coding blocks are cascaded. Both blocks are compiled in a low-cost FPGA with low silicon resource usage.
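The Dynamic Decimation idea can be illustrated with a short, hedged sketch (not the MicroBooNE FPGA implementation): samples inside a region of interest around a hit are kept at full rate, while pedestal-region samples are kept only every N-th sample. The pedestal value, threshold, margins and decimation factor below are illustrative assumptions.

```python
# Illustrative sketch of Dynamic Decimation (not the MicroBooNE FPGA code).
# Assumptions: a fixed pedestal, an amplitude threshold defining the ROI,
# pre/post margins around each hit, and a decimation factor elsewhere.

def dynamic_decimate(samples, pedestal=2048, threshold=20,
                     pre=8, post=8, decimation=16):
    """Keep full-rate samples inside ROIs, every `decimation`-th sample outside.

    Returns (index, value) pairs so the original time base can be
    reconstructed approximately on readout.
    """
    n = len(samples)
    in_roi = [False] * n

    # Mark the ROI: samples whose deviation from the pedestal exceeds the
    # threshold, plus pre/post margins around them.
    for i, s in enumerate(samples):
        if abs(s - pedestal) > threshold:
            for j in range(max(0, i - pre), min(n, i + post + 1)):
                in_roi[j] = True

    kept = []
    for i, s in enumerate(samples):
        if in_roi[i] or i % decimation == 0:
            kept.append((i, s))
    return kept


if __name__ == "__main__":
    # A flat pedestal with one pulse: most pedestal samples are decimated away,
    # while the pulse region survives at the full sampling rate.
    waveform = [2048] * 200
    for k, amp in enumerate([30, 80, 120, 90, 50, 25]):
        waveform[100 + k] += amp
    kept = dynamic_decimate(waveform)
    print(f"kept {len(kept)} of {len(waveform)} samples")
```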
{"title":"Data reduction processes using FPGA for MicroBooNE liquid argon time projection chamber","authors":"Jinyuan Wu","doi":"10.1109/RTC.2010.5750354","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750354","url":null,"abstract":"MicroBooNE is a liquid Argon time projection chamber to be built at Fermilab for an accelerator-based neutrino physics experiment and as part of the R&D strategy for a large liquid argon detector at DUSEL. The waveforms of the ∼9000 sense wires in the chamber are continuously digitized at 2 M samples/s - which results in a large volume of data coming off the TPC. We have developed a lossless data reduction scheme based on Huffman Coding and have tested the scheme on cosmic ray data taken from a small liquid Argon time projection chamber, the BO detector. For sense wire waveforms produced by cosmic ray tracks, the Huffman Coding scheme compresses the data by a factor of approximately 10. The compressed data can be fully recovered back to the original data since the compression is lossless. In addition to accelerator neutrino data, which comes with small duty cycle in sync with the accelerator beam spill, continuous digitized waveforms are to be temporarily stored in the MicroBooNE data-acquisition system for about an hour, long enough for an external alert from possible supernova events. Another scheme, Dynamic Decimation, has been developed to compress further the potential supernova data so that the storage can be implemented within a reasonable budget. In the Dynamic Decimation scheme, data are sampled at the full sampling rate in the regions-of-interest (ROI) containing waveforms of track-hits and are decimated down to lower sampling rate outside the ROI. Note that unlike in typical zero-suppression schemes, in Dynamic Decimation, the data in the pedestal region are not thrown away but kept at a lower sampling rate. An additional factor of 10 compression ratio is achieved using the Dynamic Decimation scheme on the BO detector data, making a total compression rate of approximate 100 when the Dynamic Decimation and the Huffman Coding functional blocks are cascaded. Both of the blocks are compiled in low-cost FPGA and their silicon resource usages are low.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125734073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A GPU-based architecture for real-time data assessment at synchrotron experiments
Pub Date: 2010-05-24 | DOI: 10.1145/2148600.2148627
S. Chilingaryan, A. Kopmann, A. Mirone, T. Rolo, M. Vogelgesang
Current imaging experiments at synchrotron beam lines often lack real-time data assessment. X-ray imaging cameras installed at synchrotron facilities like ANKA provide millions of pixels, each with a resolution of 12 bits or more, and acquire up to several thousand frames per second. A single experiment can therefore produce data sets of multiple gigabytes in a few seconds. Up to now, the data have been stored in local memory, transferred to mass storage, and then processed and analyzed off-line. The data quality, and thus the success of the experiment, can therefore only be judged with a substantial delay, which makes immediate monitoring of the results impossible. To optimize the usage of the micro-tomography beam line at ANKA, we have ported the reconstruction software to modern graphics adapters, which offer enormous computational power. With a sample dataset of 20 GB, we were able to reduce the reconstruction time from several hours to just a few minutes. With the new reconstruction software it is possible to provide near real-time visualization and to significantly reduce the time needed for a first evaluation of the reconstructed sample. The main paradigm of our approach is 100% utilization of all system resources: the compute-intensive parts are offloaded to the GPU, and while the GPU reconstructs one slice, the CPUs prepare the next one. Special attention is devoted to minimizing data transfers between host and GPU memory and to executing I/O operations in parallel with the computations. For our application it is now the data transfers, not the computation, that limit the reconstruction speed; several changes in the architecture of the DAQ system are proposed to overcome this second bottleneck. The article introduces the system architecture, describes the hardware platform in detail, and analyzes the performance gains during the first half year of operation.
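The "100% utilization" paradigm, preparing the next slice on the CPU while the previous one is being reconstructed, can be sketched as a simple two-stage pipeline with a bounded queue acting as a double buffer. The snippet below is a stand-in using threads; in the real system the second stage runs on the GPU, and the stage functions here are hypothetical placeholders, not the ANKA reconstruction code.

```python
# Minimal sketch of overlapping CPU-side preparation with the compute-heavy
# reconstruction stage, using a bounded queue as a double buffer.
# The stage functions are hypothetical placeholders.

import queue
import threading

def prepare_slice(i):
    # CPU stage: read raw projections, apply preprocessing (placeholder).
    return f"prepared slice {i}"

def reconstruct_slice(data):
    # Compute-heavy stage (runs on the GPU in the real system); placeholder.
    return data.replace("prepared", "reconstructed")

def run_pipeline(n_slices, depth=2):
    buf = queue.Queue(maxsize=depth)   # small depth = double buffering
    results = []

    def producer():
        for i in range(n_slices):
            buf.put(prepare_slice(i))  # blocks if the consumer falls behind
        buf.put(None)                  # sentinel: no more work

    t = threading.Thread(target=producer)
    t.start()
    while True:
        item = buf.get()
        if item is None:
            break
        results.append(reconstruct_slice(item))
    t.join()
    return results

if __name__ == "__main__":
    print(run_pipeline(5))
```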
{"title":"A GPU-based architecture for real-time data assessment at synchrotron experiments","authors":"S. Chilingaryan, A. Kopmann, A. Mirone, T. Rolo, M. Vogelgesang","doi":"10.1145/2148600.2148627","DOIUrl":"https://doi.org/10.1145/2148600.2148627","url":null,"abstract":"Current imaging experiments at synchrotron beam lines often lack a real-time data assessment. X-ray imaging cameras installed at synchrotron facilities like ANKA provide millions of pixels, each with a resolution of 12 bits or more, and take up to several thousand frames per second. A given experiment can produce data sets of multiple gigabytes in a few seconds. Up to now the data is stored in local memory, transferred to mass storage, and then processed and analyzed off-line. The data quality and thus the success of the experiment, can, therefore, only be judged with a substantial delay, which makes an immediate monitoring of the results impossible. To optimize the usage of the micro-tomography beam-line at ANKA we have ported the reconstruction software to modern graphic adapters which offer an enormous amount of calculation power. We were able to reduce the reconstruction time from multiple hours to just a few minutes with a sample dataset of 20 GB. Using the new reconstruction software it is possible to provide a near real-time visualization and significantly reduce the time needed for the first evaluation of the reconstructed sample. The main paradigm of our approach is 100% utilization of all system resources. The compute intensive parts are offloaded to the GPU. While the GPU is reconstructing one slice, the CPUs are used to prepare the next one. A special attention is devoted to minimize data transfers between the host and GPU memory and to execute I/O operations in parallel with the computations. It could be shown that for our application not the computational part but the data transfers are now limiting the speed of the reconstruction. Several changes in the architecture of the DAQ system are proposed to overcome this second bottleneck. The article will introduce the system architecture, describe the hardware platform in details, and analyze performance gains during the first half year of operation.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"04 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127257477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Method of active correlations: Present status
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750456
Y. Tsyganov, A. Polyakov, A. Sukhov, V. Subbotin, A. Voinov
In recent years, a successful cycle of experiments on the synthesis of superheavy elements with Z = 112–118 using a 48Ca beam has been carried out at the FLNR (JINR). From the viewpoint of detecting rare decays and suppressing background, this success was achieved through the application of a radically new technique, the method of active correlations. The method searches in real time for an indication of a probable correlation, such as recoil-alpha, and uses it to switch the beam off. If an additional alpha-decay event is detected in the same detector strip, the “beam OFF” time interval is prolonged automatically. Reasonable scenarios for developing the method further are considered. The PC-based data acquisition system and the monitoring and control system of the Dubna Gas-Filled Recoil Separator are also briefly described.
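A hedged sketch of the real-time logic described above: when a recoil implantation is followed in the same detector strip by an alpha-like event inside a correlation window, the beam is switched off, and each further alpha candidate in that strip prolongs the beam-off interval. The time constants and event model below are illustrative assumptions, not the experiment's actual settings.

```python
# Illustrative sketch of the "active correlations" beam-off logic.
# All window parameters and the event model are assumed values.

CORRELATION_WINDOW = 10.0   # s, recoil -> alpha search window (assumed)
BEAM_OFF_BASE = 60.0        # s, beam-off duration per trigger (assumed)

class StripState:
    def __init__(self):
        self.last_recoil_time = None
        self.beam_off_until = None

def process_event(state, kind, t):
    """kind: 'recoil' or 'alpha'; t: event time in seconds.

    Returns True if the beam should currently be off for this strip.
    """
    if kind == "recoil":
        state.last_recoil_time = t
    elif kind == "alpha":
        recoil_seen = (state.last_recoil_time is not None
                       and t - state.last_recoil_time < CORRELATION_WINDOW)
        already_off = (state.beam_off_until is not None
                       and t < state.beam_off_until)
        if recoil_seen or already_off:
            # Start or prolong the beam-off interval for this strip.
            state.beam_off_until = t + BEAM_OFF_BASE
    return state.beam_off_until is not None and t < state.beam_off_until

if __name__ == "__main__":
    s = StripState()
    for kind, t in [("recoil", 0.0), ("alpha", 2.0), ("alpha", 40.0)]:
        print(kind, t, "beam off:", process_event(s, kind, t))
```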
{"title":"Method of active correlations: Present status","authors":"Y. Tsyganov, A. Polyakov, A. Sukhov, V. Subbotin, A. Voinov","doi":"10.1109/RTC.2010.5750456","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750456","url":null,"abstract":"During the recent years, at the FLNR (JINR) a successful cycle of experiments has been accomplished on the synthesis of the superheavy elements with Z=112–118 with 48Ca beam. From the viewpoint of the detection of rare decays and background suppression, this success was achieved due to the application of a new radical technique - the method of active correlations. The method employs search in a real-time mode for a pointer to a probable correlation like recoil-alpha for switching the beam off. In the case of detection in the same detector strip an additional alpha-decay event, of “beam OFF” time interval is prolonged automatically. Reasonable scenarios of developing the method are considered. PC based data acquisition system as well as the monitoring and control system of the Dubna Gas Filled Recoil Separator is considered in brief too.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129086657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overview of the ATLAS data acquisition system operating at the TeV scale
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750347
C. Borer
This paper focuses on the operation of the ATLAS data acquisition system during the first months of 2010. ATLAS is one of the two multipurpose detectors at the Large Hadron Collider (LHC), which provides proton-proton collisions at the unprecedented centre-of-mass energy of 7 TeV. The ATLAS data acquisition system is based on O(2k) processing nodes interconnected by a multi-layer Gigabit Ethernet network. About 20k applications provide the needed capabilities in terms of run control, event selection, data flow, local storage and data monitoring. The whole data acquisition system has been successfully commissioned during the last two years with cosmic ray and calibration data and has proven robust and reliable. Nevertheless, continuous operation with beams, concurrent trigger commissioning, and the understanding of detector and physics performance will pose new challenges. The flexibility of the data acquisition infrastructure will be probed and exploited in order to cope with the resulting unpredictable working conditions in terms of data flow, monitoring and configuration requirements. Concerning the latter in particular, the data acquisition efficiency will have to be kept under control, profiting from the dedicated tools and techniques put in place for this purpose. The goal is to minimise both downtime and dead time, allowing for runtime reconfiguration of the data acquisition and sub-detector systems as well as automatic error handling and recovery.
{"title":"Overview of the ATLAS data acquisition system operating at the TeV scale","authors":"C. Borer","doi":"10.1109/RTC.2010.5750347","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750347","url":null,"abstract":"This paper focuses on the operation of the ATLAS data acquisition system during the first months of 2010. ATLAS is one of the two multipurpose detectors at the Large Hadron Collider (LHC), which provides proton-proton collisions at the unprecedented centre-of-mass energy of 7 TeV. The ATLAS data acquisition system is based on O(2k) processing nodes, interconnected by a multi-layer Gigabit Ethernet network. About 20k applications will provide the needed capabilities in terms of run control, event selection, data flow, local storage and data monitoring. The whole data acquisition system has been successfully commissioned during the last two years with cosmic ray and calibration data and it turned out to be robust and reliable. Nevertheless, the continuous operation with beams, the concurrent trigger commissioning, and the understanding of detector and physics performance will pose new challenges. The flexibility of the data acquisition infrastructure will be probed and exploited, in order to comply with the consequent unpredictable working conditions in terms of data-flow, monitoring and configuration requirements. Concerning the latter in particular, the data acquisition efficiency will have to be kept under control, profiting by the special tools and techniques especially put in place. The goal is to minimise both downtime and dead-time, allowing for runtime reconfiguration of the data acquisition and sub-detectors systems as well as for automatic error handling and recovery.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115164627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-line trigger processing for a small animal RPC-Pet camera
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750448
F. Clemêncio, C. Loureiro, J. Landeck
A complex task for PET cameras is the design of an appropriate coincidence-detection trigger system, as it usually has to handle coincidences across a large number of channels under tight timing specifications. These requirements are even more demanding for a resistive plate chamber (RPC)-based detector technology, where the time window is quite small (on the order of a few hundred picoseconds) and the number of coincidence channels can be quite large.
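As a rough illustration of the coincidence-detection task (not the camera's actual on-line trigger), the sketch below pairs time-stamped hits from two detector heads whose time difference falls inside a few-hundred-picosecond window; the 300 ps window and the hit data layout are assumptions.

```python
# Toy software model of coincidence detection between two detector heads.
# Hit format and the 300 ps window are illustrative assumptions.

def find_coincidences(hits_a, hits_b, window_ps=300):
    """hits_a, hits_b: time-sorted lists of (timestamp_ps, channel).

    Returns pairs of hits whose timestamps differ by at most window_ps,
    using a two-pointer sweep so the cost stays linear in the hit count.
    """
    pairs = []
    j = 0
    for t_a, ch_a in hits_a:
        # Advance j past hits in B that are too early to match t_a.
        while j < len(hits_b) and hits_b[j][0] < t_a - window_ps:
            j += 1
        k = j
        while k < len(hits_b) and hits_b[k][0] <= t_a + window_ps:
            pairs.append(((t_a, ch_a), hits_b[k]))
            k += 1
    return pairs

if __name__ == "__main__":
    a = [(1000, 3), (5000, 7)]
    b = [(1150, 12), (9000, 4)]
    print(find_coincidences(a, b))   # only the first pair is within 300 ps
```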
{"title":"On-line trigger processing for a small animal RPC-Pet camera","authors":"F. Clemêncio, C. Loureiro, J. Landeck","doi":"10.1109/RTC.2010.5750448","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750448","url":null,"abstract":"A complex task for pet cameras is the design of an appropriate coincidence-detection trigger system as it usually encompasses coincidences in a large number of channels and tight time specifications. Those requirements are even greater for a resistive plate chamber (RPC)-based detector technology as the time window specification is quite small (in the order of a few hundred picoseconds) and the number of coincidence-channels can be quite large.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121019485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architecture and commissioning of the TCV distributed feedback control system
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750487
J. Paley, S. Coda, B. Duval, F. Felici, J. Moret
A new modular, digital, distributed feedback control system has been developed and installed to control the TCV plasma. With many more inputs and outputs, it provides the possibility to build control algorithms using far more information on the plasma state than previously possible as well as the ability to control many more actuators, including the multi-megawatt, multi-launcher electron cyclotron heating and current drive system. This paper provides an overview of the new control system, its integration into the TCV systems and its successful application to control the TCV plasma discharge.
{"title":"Architecture and commissioning of the TCV distributed feedback control system","authors":"J. Paley, S. Coda, B. Duval, F. Felici, J. Moret","doi":"10.1109/RTC.2010.5750487","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750487","url":null,"abstract":"A new modular, digital, distributed feedback control system has been developed and installed to control the TCV plasma. With many more inputs and outputs, it provides the possibility to build control algorithms using far more information on the plasma state than previously possible as well as the ability to control many more actuators, including the multi-megawatt, multi-launcher electron cyclotron heating and current drive system. This paper provides an overview of the new control system, its integration into the TCV systems and its successful application to control the TCV plasma discharge.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121844377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jitter issues in clock conditioning with FPGAs
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750386
A. Aloisio, R. Giordano, V. Izzo
Embedded Delay Locked Loops (DLLs) and Phase Locked Loops (PLLs) are available as hard macros in the latest Field Programmable Gate Arrays. The main features offered by DLLs and PLLs are clock phase de-skewing, frequency synthesis (multiplication or division) and jitter filtering. The clock signal at the output of a DLL or a PLL exhibits phase noise, or jitter, which has to be taken into account in timing-sensitive applications such as analog-to-digital conversion, time measurements or high-speed serial links. In this work we present the results of a jitter analysis conducted on PLLs and DLLs embedded in a Xilinx Virtex-5 FPGA. We explored different configurations of the PLLs and DLLs (clock multiplication and clock network de-skew) at different frequencies.
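For readers unfamiliar with the quantities involved, the sketch below computes RMS period jitter and peak cycle-to-cycle jitter from a list of measured clock-edge timestamps. It is a generic textbook-style calculation with made-up data, not the measurement procedure used in the paper.

```python
# Generic period-jitter / cycle-to-cycle-jitter calculation from edge times.
# Edge timestamps are in seconds; the example data are made up.

import statistics

def jitter_metrics(edge_times):
    """Return (RMS period jitter, peak cycle-to-cycle jitter) in seconds."""
    periods = [t2 - t1 for t1, t2 in zip(edge_times, edge_times[1:])]
    mean_period = statistics.mean(periods)
    # RMS deviation of each period from the mean period.
    rms_period_jitter = statistics.mean(
        [(p - mean_period) ** 2 for p in periods]) ** 0.5
    # Largest change between consecutive periods.
    c2c = max(abs(p2 - p1) for p1, p2 in zip(periods, periods[1:]))
    return rms_period_jitter, c2c

if __name__ == "__main__":
    # Nominal 100 MHz clock (10 ns period) with a few picoseconds of wander.
    edges = [0.0, 10.002e-9, 19.999e-9, 30.003e-9, 39.998e-9]
    rms, c2c = jitter_metrics(edges)
    print(f"rms period jitter: {rms*1e12:.1f} ps, "
          f"cycle-to-cycle: {c2c*1e12:.1f} ps")
```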
{"title":"Jitter issues in clock conditioning with FPGAs","authors":"A. Aloisio, R. Giordano, V. Izzo","doi":"10.1109/RTC.2010.5750386","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750386","url":null,"abstract":"Embedded Delay Locked Loops (DLLs) and Phase Locked Loops (PLLs) are available as hard-macros in the latest Field Programmable Gate Arrays. The main features offered by DLLs and PLLs are clock phase de-skewing, frequency synthesis (multiplication or division) and jitter filtering. The clock signal at the output of a DLL or a PLL has a phase noise (or jitter), which has to be taken into account in timing sensitive applications, such as analog-to-digital conversion, time measurements or high-speed serial links. In this work we present the results of jitter analysis conducted on PLLs and DLLs embedded in a Xilinx Virtex 5 FPGA. We explored different configurations (clock multiplication and clock network de-skew) of PLLs and DLLs, at different frequencies.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121764554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PICMG xTCA standards extensions for Physics: New developments and future plans
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750327
R. Larsen
After several years of planning and workshop meetings, a decision was reached in late 2008 to organize PICMG xTCA for Physics Technical Subcommittees to extend the ATCA and MTCA telecom standards for enhanced system performance, availability and interoperability for physics controls and applications hardware and software. Since formation in May–June 2009, the Hardware Technical Subcommittee has developed a number of ATCA, ARTM, AMC, MTCA and RTM extensions to be completed in mid-to-late 2010. The Software Technical Subcommittee is developing guidelines to promote interoperability of modules designed by industry and laboratories, in particular focusing on middleware and generic application interfaces such as Standard Process Model, Standard Device Model and Standard Hardware API. The paper describes the prototype design work completed by the lab-industry partners to date, the timeline for hardware releases to PICMG for approval, and the status of the software guidelines roadmap. The paper also briefly summarizes the program of the 4th xTCA for Physics Workshop immediately preceding the RT2010 Conference.
{"title":"PICMG xTCA standards extensions for Physics: New developments and future plans","authors":"R. Larsen","doi":"10.1109/RTC.2010.5750327","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750327","url":null,"abstract":"After several years of planning and workshop meetings, a decision was reached in late 2008 to organize PICMG xTCA for Physics Technical Subcommittees to extend the ATCA and MTCA telecom standards for enhanced system performance, availability and interoperability for physics controls and applications hardware and software. Since formation in May–June 2009, the Hardware Technical Subcommittee has developed a number of ATCA, ARTM, AMC, MTCA and RTM extensions to be completed in mid-to-late 2010. The Software Technical Subcommittee is developing guidelines to promote interoperability of modules designed by industry and laboratories, in particular focusing on middleware and generic application interfaces such as Standard Process Model, Standard Device Model and Standard Hardware API. The paper describes the prototype design work completed by the lab-industry partners to date, the timeline for hardware releases to PICMG for approval, and the status of the software guidelines roadmap. The paper also briefly summarizes the program of the 4th xTCA for Physics Workshop immediately preceding the RT2010 Conference.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130627419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ALICE HLT high speed tracking and vertexing
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750344
S. Gorbunov, K. Aamodt, T. Alt, H. Appelshauser, A. Arend, B. Becker, S. Bottger, T. Breitner, H. Busching, S. Chattopadhyay, J. Cleymans, I. Das, O. Djuvsland, H. Erdal, R. Fearick, O. Haaland, P. Hille, S. Kalcher, K. Kanaki, U. Kebschull, I. Kisel, M. Kretz, C. Lara, S. Lindal, V. Lindenstruth, A. Masoodi, G. Ovrebekk, R. Panse, J. Peschek, M. Płoskoń, M. Richter, D. Rohr, D. Røhrich, B. Skaali, T. Steinbeck, A. Szostak, J. Thader, T. Tveter, K. Ullaland, Z. Vilakazi, R. Weis, P. Zelnicek
The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 200 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s.
{"title":"ALICE HLT high speed tracking and vertexing","authors":"S. Gorbunov, K. Aamodt, T. Alt, H. Appelshauser, A. Arend, B. Becker, S. Bottger, T. Breitner, H. Busching, S. Chattopadhyay, J. Cleymans, I. Das, O. Djuvsland, H. Erdal, R. Fearick, O. Haaland, P. Hille, S. Kalcher, K. Kanaki, U. Kebschull, I. Kisel, M. Kretz, C. Lara, S. Lindal, V. Lindenstruth, A. Masoodi, G. Ovrebekk, R. Panse, J. Peschek, M. Płoskoń, M. Richter, D. Rohr, D. Røhrich, B. Skaali, T. Steinbeck, A. Szostak, J. Thader, T. Tveter, K. Ullaland, Z. Vilakazi, R. Weis, P. Zelnicek","doi":"10.1109/RTC.2010.5750344","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750344","url":null,"abstract":"The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 200 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}