The MHD control system for the FTU tokamak
G. D'Antona, S. Cirant, M. Davoudi
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750477
In this paper the architecture of the MHD control system for the Frascati Tokamak Upgrade (FTU) is presented. A set of hardware consisting of FPGA and DSP modules on a PXI bus for executing the control and estimation algorithms is proposed. Data communication among the hardware modules, both in on-line experiment mode and in off-line data acquisition, is described. A model-predictive protection system has been developed that monitors the antenna angles in real time and predicts the rest position of the antennas from a mechanical model of the antenna and motor-drive system, in order to prevent mechanical damage by alarming the control system and stopping the motors.
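The abstract does not spell out the protection model itself; as an illustration only, the sketch below (assuming a constant braking deceleration, with all names and limits hypothetical) shows how a rest position could be predicted from the measured angle and angular velocity and checked against a mechanical limit:

```python
# Hypothetical illustration of a model-predictive antenna protection check.
# Assumes the drive decelerates at a constant rate once stopped; the real
# FTU model (antenna mechanics plus motor drive) is more detailed.

MAX_DECEL = 50.0     # deg/s^2, assumed braking deceleration
ANGLE_LIMIT = 35.0   # deg, assumed mechanical end stop

def predicted_rest_angle(angle_deg: float, velocity_deg_s: float) -> float:
    """Rest position if the motor is stopped now and brakes at MAX_DECEL."""
    if velocity_deg_s == 0.0:
        return angle_deg
    stopping_distance = velocity_deg_s ** 2 / (2.0 * MAX_DECEL)
    return angle_deg + stopping_distance * (1 if velocity_deg_s > 0 else -1)

def check_antenna(angle_deg: float, velocity_deg_s: float) -> bool:
    """Return True if an alarm should be raised and the motors stopped."""
    return abs(predicted_rest_angle(angle_deg, velocity_deg_s)) >= ANGLE_LIMIT
```

With these assumed numbers, check_antenna(30.0, 25.0) raises an alarm, since the predicted rest position of 36.25 deg exceeds the 35 deg end stop.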
Measurement system of light curves from nearby supernova bursts for the Super-Kamiokande experiment
S. Yamada, Y. Hayato, M. Ikeno, M. Nakahata, S. Nakayama, Y. Obayashi, K. Okumura, M. Shiozawa, T. Uchida, T. Yokozawa
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750385
Super-Kamiokande is a ring-imaging Cherenkov detector for astro-particle physics consisting of 50 ktons of pure water and about 13000 photomultiplier tubes (PMTs). Besides measuring atmospheric and solar neutrinos, one of the main purposes of the detector is to detect neutrinos from a supernova burst. For a nearby supernova burst occurring at a distance of 500 light years, the neutrino event rate in the Super-Kamiokande detector is expected to reach 30 MHz, which is a huge load for the current data acquisition (DAQ) system. We are therefore developing an independent DAQ system as a backup for such a nearby supernova burst. This system will measure and record the total number of hits in the detector using the digitized signals from the current front-end electronics, from which the time variation of the total charge deposited in the detector during the supernova burst period can be obtained. The specification of the new system and the current status of its development are reported.
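As a rough illustration of the light-curve measurement, the following sketch (not from the paper; the bin width and interface are assumptions) histograms hit timestamps into fixed time bins, the software analogue of the hit counting done in the front-end electronics:

```python
# Hypothetical sketch of building a supernova light curve from hit counts.
# The real system counts hits in the electronics; here hit timestamps
# (in seconds) are simply histogrammed into fixed-width time bins.
import numpy as np

def light_curve(hit_times_s: np.ndarray, bin_width_s: float = 1e-3):
    """Return (bin_edges, hits_per_bin): the hit-count time profile."""
    if hit_times_s.size == 0:
        return np.array([0.0, bin_width_s]), np.array([0])
    t0 = float(hit_times_s.min())
    n_bins = int(np.ceil((hit_times_s.max() - t0) / bin_width_s)) + 1
    edges = t0 + bin_width_s * np.arange(n_bins + 1)
    counts, _ = np.histogram(hit_times_s, bins=edges)
    return edges, counts
```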
An AdvancedTCA based data concentrator and event building architecture
A. Mann, I. Konorov, Florian Goslich, S. Paul
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750387
To address the data rate requirements of upcoming experiments in high energy physics, we present a configurable architecture for data concentration and event building, based on the AdvancedTCA and MicroTCA standards. The core component is a µTCA-based module which connects a Lattice ECP3 FPGA to up to 8 front-panel fiber ports for data input from the front-end electronics. In addition, the fiber ports can distribute a synchronization clock and configuration information from a central time distribution system. To buffer the incoming data, the module provides up to 2 SO-DIMM sockets for standard DDR3 memory modules. With different firmware functionality, the buffer module can then interface to a µTCA shelf backplane via, e.g., PCI Express. To allow event building for more than 8 input links, 4 buffer modules can be combined on an ATCA carrier card, which connects to the high-speed links on the µTCA connector. The connections between the 4 µTCA cards and the ATCA backplane can then be configured dynamically by a passive crosspoint switch on the ATCA carrier card. Thus, multiple event-building topologies can be configured on the carrier card and within the full ATCA shelf to adapt to different system sizes and communication patterns.
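As a loose illustration of the configurable topologies, the sketch below (port names and the routing representation are invented for the example, not taken from the paper) expresses two possible crosspoint routings as simple input-to-output maps:

```python
# Hypothetical sketch of describing event-building topologies for a
# crosspoint switch: each (input port -> output port) route is one setting.

def star_topology(n_modules: int = 4, builder: str = "ATCA_BP0") -> dict:
    """Route every buffer module's uplink to a single event-builder port."""
    return {f"uTCA_{i}_TX": builder for i in range(n_modules)}

def daisy_chain(n_modules: int = 4) -> dict:
    """Route each module to its neighbour; the last one feeds the backplane."""
    routes = {f"uTCA_{i}_TX": f"uTCA_{i + 1}_RX" for i in range(n_modules - 1)}
    routes[f"uTCA_{n_modules - 1}_TX"] = "ATCA_BP0"
    return routes
```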
Passive Optical Networks for Timing-Trigger and Control applications in high energy physics experiments
I. Papakonstantinou, C. Soós, S. Papadopoulos, S. Détraz, C. Sigaud, P. Stejskal, S. Storey, J. Troska, F. Vasey
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750353
The present paper discusses recent advances in a Passive Optical Network (PON) inspired Timing-Trigger and Control (TTC) scheme for the upgraded Super Large Hadron Collider. The proposed system targets the replacement of the TTC system installed in the LHC experiments' counting rooms, and more specifically the link currently known as TTCex-to-TTCrx. The timing PON is implemented with commercially available FPGAs and Ethernet PON transceivers and provides a fixed-latency gigabit downlink that can carry Level-1 trigger accepts and commands, as well as an upstream link for feedback from the front-end electronics.
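The paper does not give the downlink format; purely as an illustration, the sketch below assumes a minimal fixed-size downstream word carrying one Level-1 accept bit and an 8-bit command per bunch crossing:

```python
# Hypothetical fixed-size downstream word: one Level-1 accept bit plus an
# 8-bit command payload. The field layout is an illustration only and is
# not taken from the paper.
from dataclasses import dataclass

@dataclass
class DownlinkWord:
    l1_accept: bool   # Level-1 trigger accept for this bunch crossing
    command: int      # 8-bit broadcast/addressed command payload

    def encode(self) -> int:
        """Pack into a 9-bit word: [accept | command]."""
        return (int(self.l1_accept) << 8) | (self.command & 0xFF)

    @staticmethod
    def decode(word: int) -> "DownlinkWord":
        return DownlinkWord(bool(word >> 8), word & 0xFF)
```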
Neutron scattering experiment automation with Python
P. Zolnierczuk, R. Riedel
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750475
PyDas is a set of Python modules used to integrate the various components of the SNS DAS system. It enables customized automation of neutron scattering experiments in a rapid and flexible manner. It provides wxPython GUIs for routine experiments as well as IPython command-line scripting. Matplotlib and NumPy are used for data presentation and simple analysis. We present an overview of the SNS Data Acquisition System and PyDas architectures and implementations, along with examples of use. We also discuss plans for future development as well as the challenges that have to be met while maintaining PyDas for 20+ different scientific instruments.
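To give a flavour of this kind of scripted automation, the following sketch uses an invented instrument interface (set_temperature and count are stand-ins, not the real PyDas API) to run a simple temperature scan and plot the count rate with Matplotlib:

```python
# Hypothetical flavour of an automated scan script; the instrument methods
# below are illustrative stand-ins, not the real PyDas API.
import numpy as np
import matplotlib.pyplot as plt

def run_temperature_scan(instrument, temperatures_k, count_time_s=60):
    """Step the sample temperature, count at each point, and plot the result."""
    rates = []
    for t in temperatures_k:
        instrument.set_temperature(t)             # assumed control call
        counts = instrument.count(count_time_s)   # assumed DAQ call
        rates.append(counts / count_time_s)
    plt.plot(temperatures_k, rates, "o-")
    plt.xlabel("Temperature (K)")
    plt.ylabel("Count rate (1/s)")
    plt.show()
    return np.array(rates)
```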
Digital filtering performance in the ATLAS Level-1 Calorimeter Trigger
D. Hadley
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750349
The ATLAS Level-1 Calorimeter Trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates, and to measure total and missing ET in the ATLAS Liquid Argon and Tile calorimeters. It is a pipelined processor system, with a new set of inputs being evaluated every 25 ns. The overall trigger decision has a latency budget of ∼2 µs, including all transmission delays. The calorimeter trigger uses about 7200 reduced-granularity analogue signals, which are first digitized at the 40 MHz LHC bunch-crossing frequency before being passed to a digital Finite Impulse Response (FIR) filter. Due to latency and chip real-estate constraints, only a simple 5-element filter with limited precision can be used. Nevertheless, this filter achieves a significant reduction in noise, along with improved bunch-crossing assignment and energy resolution for small signals. The context in which digital filters are used in the ATLAS Level-1 Calorimeter Trigger is presented, followed by a description of the methods used to determine the best filter coefficients for each detector element. The performance of these filters is investigated with commissioning data, and cross-checks of the calibration with initial beam data from ATLAS are shown.
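As a worked illustration of the filtering step (the coefficients below are placeholders; the real ones are optimised per detector element), a 5-element FIR applied to the 40 MHz samples and a peak search for the bunch-crossing assignment could look like this:

```python
# Illustrative 5-tap FIR applied to 40 MHz calorimeter samples, followed by
# a peak search for the bunch-crossing assignment. Placeholder coefficients.
import numpy as np

COEFFS = np.array([1, 4, 9, 5, 1])  # example taps, not the optimised values

def fir_filter(samples: np.ndarray) -> np.ndarray:
    """Apply the 5-element FIR to a stream of digitised samples."""
    return np.convolve(samples, COEFFS, mode="same")

def bunch_crossing_peak(filtered: np.ndarray) -> int:
    """Assign the bunch crossing to the sample where the filter output peaks."""
    return int(np.argmax(filtered))
```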
Performance of the ATLAS Inner Detector trigger algorithms in pp collisions at √s = 900 GeV
I. Christidi
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750418
The ATLAS Inner Detector (ID) trigger algorithms ran online during data taking with proton-proton collisions at the Large Hadron Collider (LHC) in December 2009. Preliminary results on the performance of the algorithms in collisions at a centre-of-mass energy of 900 GeV are presented, including comparisons to the ATLAS offline tracking algorithms and to simulations. The ATLAS trigger performs the online event selection in three stages. The ID information is used in the second and third triggering stages, called the Level-2 trigger (L2) and the Event Filter (EF) respectively, and collectively the High-Level Trigger (HLT). The HLT runs software algorithms in a large farm of commercial CPUs and is designed to reject collision events in real time, keeping the most interesting few in every thousand. The average execution time per event at L2 (EF) is about 40 ms (4 s), and the ID trigger algorithms can take only a fraction of that. Within this time, the data from interesting regions of the ID have to be accessed from central buffers through the network, unpacked, clustered and converted to the ATLAS global coordinates; pattern recognition then follows to identify the trajectories of charged particles (tracks); finally, these tracks are used in combination with other information to accept or reject events, according to whether they satisfy one or more trigger signatures. The various clients of the ID trigger information impose different constraints on the performance of the pattern recognition, in terms of track efficiency and fake rate. An overview of the different uses of the ID trigger algorithms is given, and their online performance is exemplified with results from the use of L2 tracks for the online determination of the LHC beam position.
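For concreteness, track efficiency and fake rate of the kind quoted for the ID trigger can be estimated by matching trigger tracks to offline reference tracks; the sketch below uses a simple ΔR match with an assumed cone size, as an illustration rather than the matching actually used:

```python
# Illustrative efficiency and fake-rate estimate: L2 tracks are matched to
# offline reference tracks within a Delta-R cone. Matching criteria assumed.
import math

def delta_r(t1, t2):
    """Angular distance between two tracks given as (eta, phi) tuples."""
    deta = t1[0] - t2[0]
    dphi = math.remainder(t1[1] - t2[1], 2 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(deta, dphi)

def efficiency_and_fake_rate(l2_tracks, offline_tracks, max_dr=0.05):
    matched = sum(
        any(delta_r(off, l2) < max_dr for l2 in l2_tracks) for off in offline_tracks
    )
    fakes = sum(
        not any(delta_r(l2, off) < max_dr for off in offline_tracks) for l2 in l2_tracks
    )
    eff = matched / len(offline_tracks) if offline_tracks else 0.0
    fake = fakes / len(l2_tracks) if l2_tracks else 0.0
    return eff, fake
```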
Time-critical database conditions data-handling for the CMS experiment
M. de Gruttola, S. Di Guida, V. Innocente, A. Pierro
Pub Date: 2010-05-24 | DOI: 10.1109/TNS.2011.2155084
Automatic, synchronous and, of course, reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users into a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database and hence are immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic-ray challenges and the first LHC collision data, and many improvements have been made since. The experience from these first years of operation is discussed in detail.
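A minimal sketch of such a dropbox-style service is shown below; the directory layout, file pattern and storage callbacks are assumptions made for the example, not the CMS implementation:

```python
# Hypothetical "dropbox"-style loop: users place payload files in a drop
# directory; the service writes each one to the online conditions store and
# hands it over for streaming to the offline copy. Paths and callbacks are
# illustrative only.
import shutil
import time
from pathlib import Path

DROP_DIR = Path("/data/conddb/dropbox")      # assumed drop location
DONE_DIR = Path("/data/conddb/processed")    # assumed archive location

def process_dropbox(write_online, queue_for_offline, poll_s=10):
    """Poll the drop directory and forward each payload exactly once."""
    DONE_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        for payload in sorted(DROP_DIR.glob("*.db")):
            write_online(payload)            # assumed online-DB writer
            queue_for_offline(payload)       # assumed offline streaming hook
            shutil.move(str(payload), DONE_DIR / payload.name)
        time.sleep(poll_s)
```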
Performance of the ATLAS first-level trigger with first LHC data
J. Lundberg
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750348
ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Its trigger system must reduce the anticipated proton collision rate of up to 40 MHz to a recordable event rate of 100–200 Hz. This is realized through a multi-level trigger system. The first-level trigger is implemented with custom-built electronics and makes an initial selection which reduces the rate to less than 100 kHz. The subsequent trigger selection is done in software running on PC farms. The first-level trigger decision is made by the central-trigger processor using coarse-grained calorimeter information, dedicated muon-trigger detectors, and a variety of additional trigger inputs from detectors in the forward regions. We present the performance of the first-level trigger during the commissioning of the ATLAS detector in early LHC running. We cover the trigger strategies used during the different machine commissioning phases, from first circulating beams and splash events to collisions. We describe how the very first proton events were successfully triggered using signals from scintillator trigger detectors in the forward region. For circulating and colliding beams, electrostatic button pick-up detectors were used to clock the arriving proton bunches. These signals were immediately used to aid the timing-in of the beams and of the ATLAS detector. We describe the performance and timing-in of the first-level calorimeter and muon trigger systems. The operation of the trigger relies on its real-time monitoring capabilities. We describe how trigger rates, timing information, and dead-time fractions were monitored to ensure the very good performance of the system.
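As a small illustration of the monitored quantities, the trigger rate and dead-time fraction reduce to simple ratios of counters read over a monitoring window (counter names here are generic placeholders, not the actual central-trigger-processor registers):

```python
# Illustrative monitoring calculations from generic trigger counters.
def trigger_rate(accepted: int, window_s: float) -> float:
    """Average Level-1 accept rate in Hz over the monitoring window."""
    return accepted / window_s

def dead_time_fraction(busy_ticks: int, total_ticks: int) -> float:
    """Fraction of bunch crossings vetoed while the readout was busy."""
    return busy_ticks / total_ticks if total_ticks else 0.0
```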
Online digital data processing for the T2K Fine Grained Detector
P. Amaudruz, D. Bishop, N. Braam, C. Gutjahr, D. Karlen, R. Hasanen, R. Henderson, N. Honkanen, B. Kirby, T. Lindner, A. Miller, K. Mizouchi, C. Ohlmann, K. Olchanski, S. Oser, C. Pearson, P. Poffenberger, R. Poutissou, F. Retire, H. Tanaka, J. Zalipska
Pub Date: 2010-05-24 | DOI: 10.1109/RTC.2010.5750338
The T2K Fine Grained Detector is an active neutrino target that uses segmented scintillator bars to observe short-range particle tracks. 8448 multi-pixel photon counters (MPPCs) coupled to wavelength-shifting fibres detect the scintillation light. An application-specific integrated circuit (ASIC) shapes the MPPC waveform and uses a switched capacitor array to store up to 511 analog samples over 10.24 µs. High- and low-attenuation channels for each MPPC improve the dynamic range. 12-bit serial quad ADCs digitize the ASIC analog output and interface with a field-programmable gate array (FPGA); each FPGA simultaneously reads out four ADCs and saves the synchronized samples in an external digital memory. The system produces 13.5 MB of uncompressed data per acquisition with a target trigger rate of 20 Hz, and requires zero suppression to reduce the data size and readout time. Firmware-based data compression uses an online pulse finder that decides whether to output pulse-height information, a section of waveform, or to suppress all data. The front-end FPGA transfers formatted data to collector cards through a 2 Gb/s optical fiber interface using an efficient custom protocol. We have evaluated the performance of the FGD electronics system and the quality of its online data compression over the course of a physics data run.
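To illustrate the zero-suppression logic, the sketch below implements a pulse-finder-style decision in software (thresholds and window sizes are assumptions, not the FGD firmware values): keep a compact pulse-height record for clear pulses, a waveform section for marginal ones, and nothing otherwise:

```python
# Illustrative zero-suppression decision in the spirit of the online pulse
# finder. Thresholds and window sizes are assumed for the example.
import numpy as np

def compress(waveform: np.ndarray, baseline: float,
             hi_thresh: float = 50.0, lo_thresh: float = 10.0):
    pulse = waveform - baseline
    peak_idx = int(np.argmax(pulse))
    peak = float(pulse[peak_idx])
    if peak >= hi_thresh:
        return ("pulse", peak_idx, peak)                 # pulse-height record
    if peak >= lo_thresh:
        lo, hi = max(0, peak_idx - 8), peak_idx + 8
        return ("waveform", lo, waveform[lo:hi].copy())  # waveform section
    return None                                          # suppress all data
```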