What If You Could Design Tomorrow's System Today?
Neal Wingen
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364396
This paper highlights a series of proven concepts aimed at facilitating the design of next-generation systems. Practical system design examples are examined, providing insight into how to cope with today's complex design challenges.
{"title":"What If You Could Design Tomorrow's System Today?","authors":"Neal Wingen","doi":"10.1109/DATE.2007.364396","DOIUrl":"https://doi.org/10.1109/DATE.2007.364396","url":null,"abstract":"This paper highlights a series of proven concepts aimed at facilitating the design of next generation systems. Practical system design examples are examined and provide insight on how to cope with today's complex design challenges","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122095033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Algorithm for Online Management of 2D Area of Partially Reconfigurable FPGAs
Jin Cui, Qingxu Deng, Xiuqiang He, Z. Gu
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364579
Partially runtime-reconfigurable (PRTR) FPGAs allow hardware tasks to be placed and removed dynamically at runtime. We present an efficient algorithm for finding the complete set of maximal empty rectangles on a 2D PRTR FPGA, which is useful for online placement and scheduling of HW tasks. The algorithm is incremental and only updates the local region affected by each task addition or removal event. We use simulation experiments to evaluate its performance and compare it to related work.
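The incremental update is the paper's contribution; purely as a baseline illustration of what the complete set of maximal empty rectangles is, the sketch below recomputes it from scratch by brute force on a toy occupancy grid. The grid representation and function names are invented for illustration.

```python
# Hedged baseline sketch (not the paper's incremental algorithm): brute-force
# enumeration of all maximal empty rectangles on a toy occupancy grid, where
# True marks a cell occupied by a HW task.
from itertools import product

def is_empty(grid, x1, y1, x2, y2):
    """True if every cell of the inclusive rectangle [x1..x2] x [y1..y2] is free."""
    return all(not grid[y][x]
               for x, y in product(range(x1, x2 + 1), range(y1, y2 + 1)))

def maximal_empty_rectangles(grid):
    h, w = len(grid), len(grid[0])
    rects = set()
    for x1, y1 in product(range(w), range(h)):
        for x2, y2 in product(range(x1, w), range(y1, h)):
            if not is_empty(grid, x1, y1, x2, y2):
                continue
            # Maximal = cannot grow by one column/row in any direction.
            growable = ((x1 > 0 and is_empty(grid, x1 - 1, y1, x2, y2)) or
                        (x2 < w - 1 and is_empty(grid, x1, y1, x2 + 1, y2)) or
                        (y1 > 0 and is_empty(grid, x1, y1 - 1, x2, y2)) or
                        (y2 < h - 1 and is_empty(grid, x1, y1, x2, y2 + 1)))
            if not growable:
                rects.add((x1, y1, x2, y2))
    return rects

grid = [[False, True,  False],   # one occupied cell at (x=1, y=0)
        [False, False, False]]
print(sorted(maximal_empty_rectangles(grid)))
# -> [(0, 0, 0, 1), (0, 1, 2, 1), (2, 0, 2, 1)]
```

An incremental algorithm like the paper's would instead update only the rectangles overlapping the region touched by each task arrival or departure, rather than re-enumerating everything.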
{"title":"An Efficient Algorithm for Online Management of 2D Area of Partially Reconfigurable FPGAs","authors":"Jin Cui, Qingxu Deng, Xiuqiang He, Z. Gu","doi":"10.1109/DATE.2007.364579","DOIUrl":"https://doi.org/10.1109/DATE.2007.364579","url":null,"abstract":"Partially runtime-reconfigurable (PRTR) FPGAs allow hardware tasks to be placed and removed dynamically at runtime. We present an efficient algorithm for finding the complete set of maximal empty rectangles on a 2D PRTR FPGA, which is useful for online placement and scheduling of HW tasks. The algorithm is incremental and only updates the local region affected by each task addition or removal event. We use simulation experiments to evaluate its performance and compare to related work","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"2004 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128749287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pipelined Implementation of a Real Time Programmable Encoder for Low Density Parity Check Code on a Reconfigurable Instruction Cell Architecture
Zahid Khan, T. Arslan
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364616
This paper presents a pipelined implementation of a real-time programmable irregular low-density parity-check (LDPC) encoder as specified in the IEEE P802.16e/D7 standard. The encoder is programmable for frame sizes from 576 to 2304 bits and for five different code rates. The H matrix is efficiently generated and stored for a particular frame size and code rate. The encoder is implemented on a reconfigurable instruction cell architecture, which has recently emerged as an ultra-low-power, high-performance, ANSI-C programmable embedded core. Different general and architecture-specific optimization techniques are applied to enhance the throughput. With this architecture, a throughput of 10 to 19 Mbps has been achieved; the maximum throughput achieved with pipelining/multi-core is 78 Mbps.
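The 802.16e H matrix has a structured block-circulant form that enables low-complexity encoding; the paper's pipelined RICA implementation is not reproduced here. As a hedged toy example of the underlying operation only, the sketch below encodes systematically over GF(2) assuming an H matrix already in the form H = [P | I], which is not the standard's actual structure.

```python
# Hedged toy example of systematic LDPC encoding over GF(2). Assumes H is
# already in the form H = [P | I]; the real 802.16e H matrix has a richer
# block-circulant structure that this sketch does not reproduce.
import numpy as np

def ldpc_encode(P, msg):
    """Codeword [msg | parity] with parity = P @ msg (mod 2), so that
    H @ codeword = P @ msg + I @ parity = 0 (mod 2)."""
    return np.concatenate([msg, P.dot(msg) % 2])

P = np.array([[1, 1, 0, 1],      # 3 parity checks on a 4-bit message
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
msg = np.array([1, 0, 1, 1])
cw = ldpc_encode(P, msg)

H = np.hstack([P, np.eye(3, dtype=int)])
assert not (H.dot(cw) % 2).any()   # all parity checks satisfied
print(cw)                          # -> [1 0 1 1 0 1 0]
```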
{"title":"Pipelined Implementation of a Real Time Programmable Encoder for Low Density Parity Check Code on a Reconfigurable Instruction Cell Architecture","authors":"Zahid Khan, T. Arslan","doi":"10.1109/DATE.2007.364616","DOIUrl":"https://doi.org/10.1109/DATE.2007.364616","url":null,"abstract":"This paper presents pipelined implementation of a real time programmable irregular low density parity check (LDPC) encoder as specified in the IEEE P802.16E/D7 standard. The encoder is programmable for frame sizes from 576 to 2304 and for five different code rates. H matrix is efficiently generated and stored for a particular frame size and code rate. The encoder is implemented on reconfigurable instruction cell architecture which has recently emerged as an ultra low power, high performance, ANSI-C programmable embedded core. Different general and architecture specific optimization techniques are applied to enhance the throughput. With the architecture, a throughput from 10 to 19 Mbps has been achieved. The maximum throughput achieved with pipelining/multi-core is 78 Mbps","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128258200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyclostationary Feature Detection on a tiled-SoC
A. Kokkeler, G. Smit, T. Krol, J. Kuper
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364586
In this paper, a two-step methodology is introduced to analyse the mapping of cyclostationary feature detection (CFD) onto a multi-core processing platform. In the first step, the tasks to be executed by each core are determined in a structured way using techniques known from the design of array processors. In the second step, the implementation of the tasks on a processing core is analysed. Using this methodology, it is shown that calculating a 127 × 127 discrete spectral correlation function requires approximately 140 μs on a tiled system-on-chip (SoC) with 4 Montium cores.
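As a rough, hedged illustration of the quantity being computed (not of the Montium/tiled-SoC mapping), the numpy sketch below estimates a discrete spectral correlation function by averaging products of frequency-shifted per-frame spectra; the frame length, cyclic-frequency bins, and test signal are all invented.

```python
# Hedged numpy sketch of a discrete spectral correlation function estimate:
# average products of per-frame spectra shifted by +/- alpha/2.
import numpy as np

def scf(x, frame_len=128, alphas=range(0, 32, 4)):
    frames = x[:len(x) // frame_len * frame_len].reshape(-1, frame_len)
    X = np.fft.fft(frames, axis=1)            # per-frame spectra
    return {a: np.mean(np.roll(X, -a // 2, axis=1) *
                       np.conj(np.roll(X, a // 2, axis=1)), axis=0)
            for a in alphas}                  # alpha = cyclic frequency (bins)

# BPSK-like test signal (8 samples/symbol): features appear at alpha = 16 bins.
rng = np.random.default_rng(0)
symbols = rng.integers(0, 2, 512) * 2 - 1
x = np.repeat(symbols, 8).astype(float) + 0.1 * rng.standard_normal(4096)
S = scf(x)
print({a: round(float(np.abs(v).max()), 1) for a, v in S.items()})
```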
{"title":"Cyclostationary Feature Detection on a tiled-SoC","authors":"A. Kokkeler, G. Smit, T. Krol, J. Kuper","doi":"10.1109/DATE.2007.364586","DOIUrl":"https://doi.org/10.1109/DATE.2007.364586","url":null,"abstract":"In this paper, a two-step methodology is introduced to analyse the mapping of cyclostationary feature detection (CFD) onto a multi-core processing platform. In the first step, the tasks to be executed by each core are determined in a structured way using techniques known from the design of array processors. In the second step, the implementation of tasks on a processing core is analysed. Using this methodology, it is shown that calculating a 127 times 127 discrete spectral correlation function requires approximately 140 mus on a tiled system on chip (SoC) with 4 Montium cores","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126910256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Smooth Refinement Flow for Co-designing HW and SW Threads
Paolo Destro, F. Fummi, G. Pravadelli
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364575
Separation of the HW and SW design flows is a critical aspect of embedded-system development. Co-verification becomes necessary, implying the development of complex co-simulation strategies. This paper presents a refinement flow that delays the separation between HW and SW concurrent entities (threads) as long as possible, allowing their differentiation while preserving a homogeneous simulation environment. The approach relies on SystemC as the unique reference language. However, the SystemC threads corresponding to the SW application are simulated outside the control of the SystemC simulation kernel, to exploit the typical features of the multi-threading real-time operating systems running on embedded systems. HW threads, by contrast, maintain the original simulation semantics of SystemC. This allows designers to effectively tune the SW application before HW/SW partitioning, leaving SW generation to an automatic procedure and thus avoiding error-prone, time-consuming manual conversions.
{"title":"A Smooth Refinement Flow for Co-designing HW and SW Threads","authors":"Paolo Destro, F. Fummi, G. Pravadelli","doi":"10.1109/DATE.2007.364575","DOIUrl":"https://doi.org/10.1109/DATE.2007.364575","url":null,"abstract":"Separation of HW and SW design flows represents a critical aspect in the development of embedded systems. Co-verification becomes necessary, thus implying the development of complex co-simulation strategies. This paper presents a refinement flow that delays as much as possible the separation between HW and SW concurrent entities (threads), allowing their differentiation, but preserving an homogeneous simulation environment. The approach relies on SystemC as the unique reference language. However, SystemC threads, corresponding to the SW application, are simulated outside the control of the SystemC simulation kernel to exploit the typical features of multi-threading real-time operating systems running on embedded systems. On the contrary HW threads maintain the original simulation semantics of SystemC. This allows designers to effectively tune the SW application before HW/SW partitioning, leaving to an automatic procedure the SW generation, thus avoiding error-prone and time-consuming manual conversions","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121160453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of the "FOCUS" Inband-FEC Architecture for 10-Gbps SDH/SONET Optical Communication Channels
A. Tychopoulos, O. Koufopavlou
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266712
Forward-error correction (FEC) is of key importance to the robustness of optical communication networks. In particular, inband FEC is an attractive option because it improves channel performance without requiring an increase in transmission bandwidth. We have devised and implemented a novel inband-FEC method, dubbed FOCUS, for the electronic mitigation of physical impairments in SDH/SONET optical networks. It is an inherently low-cost approach for both the metro and backbone network regions, scalable to any SDH/SONET rate and capable of significantly increasing optical channel performance. This paper analyzes the most sophisticated of the many optimizations that were employed to minimize the architectural complexity of FOCUS, falling into three categories: a) arithmetic operator design, b) resource sharing, and c) redundant logic elimination. These optimizations were necessary to obtain a prototype, which eventually permitted the first fully successful laboratory evaluation of the FOCUS inband-FEC method.
{"title":"Optimization of the \"FOCUS\" Inband-FEC Architecture for 10-Gbps SDH/SONET Optical Communication Channels","authors":"A. Tychopoulos, O. Koufopavlou","doi":"10.1145/1266366.1266712","DOIUrl":"https://doi.org/10.1145/1266366.1266712","url":null,"abstract":"Forward-error correction (FEC) is of key importance to the robustness of optical communication networks. In particular, inband-FEC is an attractive option, because it improves channel-performance without requiring an increase of the transmission bandwidth. We have devised and implemented a novel inband FEC method, dubbed FOCUS, for the electronic-mitigation of physical impairments in SDH/SONET optical networks. It is an inherently low-cost approach for both the metro and backbone network regions, scalable to any SDH/SONET rate and capable to significantly increase optical channel performance. This paper analyzes the most sophisticated ones from the plethora of optimizations that were employed to minimize the architectural complexity of FOCUS, falling in: a) Arithmetic operator design, b) Resource sharing and c) Redundant logic elimination. These optimizations were necessary to obtain a prototype, which eventually permitted the first fully successful laboratory evaluation of the FOCUS inband-FEC method","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121389261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and Simulation to the Design of ΣΔ Fractional-N Frequency Synthesizer
Shuilong Huang, Huainan Ma, Zhihua Wang
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364606
A set of behavioral voltage-domain Verilog-A/Verilog models allowing systematic design of ΣΔ fractional-N frequency synthesizers is discussed in this paper. The approach allows the designer to accurately predict the dynamic and steady-state characteristics of the closed loop by including the nonlinear effects of the building blocks in the models. The proposed models are applied to a third-order ΣΔ fractional-N PLL-based frequency synthesizer with a 60 MHz frequency tuning range. Cadence SpectreVerilog simulation results show that behavioral modeling can provide a great speed-up over circuit-level simulation. At the same time, phase noise, spurs, and settling time can also be accurately predicted, which helps the designer grasp the fundamentals at an early stage and optimize the design at the system level. The key simulation results have been compared against measured results obtained from an actual prototype, validating the effectiveness of the proposed models.
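As a much-simplified, hedged illustration of behavioral PLL modeling (the paper's models are Verilog-A/Verilog and capture nonlinearities this sketch omits), the snippet below integrates a phase-domain, averaged charge-pump PLL model to observe settling after a reference frequency step; every loop constant is invented.

```python
# Hedged sketch: phase-domain, averaged charge-pump PLL model stepped with
# Euler integration. All constants are illustrative, not from the paper.
import numpy as np

Icp = 100e-6                 # charge-pump current (A)
Kvco = 2 * np.pi * 50e6      # VCO gain (rad/s per V)
N = 64                       # feedback divider
R, C = 10e3, 1e-9            # series-RC loop filter
dw_ref = 2 * np.pi * 10e3    # 10 kHz step in reference frequency (rad/s)

theta_e, vc, dt = 0.0, 0.0, 10e-9
for _ in range(20000):                       # simulate 200 us of loop time
    i = Icp * theta_e / (2 * np.pi)          # averaged PFD/charge-pump model
    v = vc + i * R                           # loop-filter (control) voltage
    dw_out = Kvco * v                        # VCO frequency deviation
    theta_e += (dw_ref - dw_out / N) * dt    # phase error accumulates
    vc += (i / C) * dt                       # capacitor integrates CP current
print(f"settled offset: {dw_out / (2 * np.pi) / 1e3:.1f} kHz "
      f"(expected {N * dw_ref / (2 * np.pi) / 1e3:.0f} kHz)")
```

Behavioral models of this kind trade transistor-level accuracy for orders-of-magnitude faster simulation, which is what makes system-level tuning of loop dynamics practical.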
{"title":"Modeling and Simulation to the Design of ΣΔ Fractional-N Frequency Synthesizer","authors":"Shuilong Huang, Huainan Ma, Zhihua Wang","doi":"10.1109/DATE.2007.364606","DOIUrl":"https://doi.org/10.1109/DATE.2007.364606","url":null,"abstract":"A set of behavioral voltage-domain verilogA/verilog models allowing a systematic design of the ΣΔ fractional-N frequency synthesizer is discussed in the paper. The approach allows the designer to accurately predict the dynamic or stable characteristic of the closed loop by including nonlinear effects of building blocks in the models. The proposed models are implemented in a three-order ΣΔ fractional-N PLL based frequency synthesizer with a 60MHz frequency tuning range. Cadence SpectreVerilog simulation results show that behavioral modeling can provide a great speed-up over circuit-level simulation. Synchronously, the phase noise, spurs and settling time can also be accurately predicted, so it is helpful to a grasp of the fundamentals at the early stage of the design and optimization design at the system level. The key simulation results have been compared against measured results obtained from an actual prototype validating the effectiveness of the proposed models","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"395 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113998343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of test measures for LNA production testing using a multinormal statistical model
J. Tongbong, S. Mir, J. Carbonéro
Pub Date: 2007-04-16 | DOI: 10.1109/DATE.2007.364682
For design-for-test (DFT) purposes, analogue and mixed-signal testing has to cope with the difficulty of test evaluation before production. This paper aims at evaluating test measures for RF components in order to optimize production test sets and thus reduce test cost. For this, we have first developed a statistical model of the performances and possible test measures of the circuit under test (a low-noise amplifier). The statistical multi-normal model is derived from data obtained using Monte Carlo circuit simulation (five hundred iterations). This statistical model is then used to generate a larger circuit population (one million instances) from which test metrics can be estimated with ppm precision at the design stage, considering just process deviations. With this model, a trade-off between the defect level and the yield loss resulting from process deviations is used to set test limits. After fixing the test limits, we have carried out a fault simulation campaign to verify the suitability of the different test measurements, targeting both catastrophic and single parametric faults. Catastrophic faults are modelled by shorts and opens. A parametric fault is defined as the minimum value of a physical parameter that causes a specification to be violated. Test metrics are then evaluated for the LNA case study. As a result, test metrics for functional measurements such as S-parameters and noise figure are compared with low-cost test measurements such as RMS and peak-to-peak current consumption and output voltage, input/output impedance, and the correlation between current consumption and output voltage.
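As a hedged sketch of the statistical flow described above, with invented toy performances and limits rather than the paper's LNA data, the snippet below fits a multivariate normal to a small "Monte Carlo" sample, draws a million synthetic circuits, and estimates yield loss and defect level for a given test limit.

```python
# Hedged sketch: fit a multivariate normal to a few hundred "Monte Carlo"
# samples of (performance, test measure), draw 10^6 synthetic circuits, and
# estimate test metrics. All numbers are invented, not the paper's LNA data.
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 500 Monte Carlo runs: gain (dB) and supply current (mA),
# correlated through process deviations.
mc = rng.multivariate_normal([15.0, 8.0], [[0.25, 0.12], [0.12, 0.16]], 500)

mean, cov = mc.mean(axis=0), np.cov(mc, rowvar=False)   # fitted model
pop = rng.multivariate_normal(mean, cov, 1_000_000)     # synthetic population

spec_ok = pop[:, 0] >= 14.0                              # spec: gain >= 14 dB
test_ok = np.abs(pop[:, 1] - mean[1]) <= 2 * np.sqrt(cov[1, 1])  # IDD limits

yield_loss = np.mean(~test_ok[spec_ok])    # good circuits rejected by the test
defect_level = np.mean(~spec_ok[test_ok])  # faulty circuits accepted (ppm scale)
print(f"yield loss = {yield_loss:.4%}, defect level = {defect_level * 1e6:.0f} ppm")
```

Tightening the IDD window lowers the defect level at the cost of yield loss; sweeping the limit over the synthetic population is what lets that trade-off be made at the design stage.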
{"title":"Evaluation of test measures for LNA production testing using a multinormal statistical model","authors":"J. Tongbong, S. Mir, J. Carbonéro","doi":"10.1109/DATE.2007.364682","DOIUrl":"https://doi.org/10.1109/DATE.2007.364682","url":null,"abstract":"For design-for-test (DFT) purposes, analogue and mixed-signal testing has to cope with the difficulty of test evaluation before production. This paper aims at evaluating test measures for RF components in order to optimize production test sets and thus reduce test cost. For this, we have first developed a statistical model of the performances and possible test measures of the circuit under test (a low noise amplifier). The statistical multi-normal model is derived from data obtained using Monte-Carlo circuit simulation (five hundred iterations). This statistical model is then used to generate a larger circuit population (one million instances) from which test metrics can be estimated with ppm precision at the design stage, considering just process deviations. With the use of this model, a trade-off between defect level and yield loss resulting from process deviations is used to set test limits. After fixing test limits, we have carried out a fault simulation campaign to verify the suitability of the different test measurements, targeting both catastrophic and single parametric faults. Catastrophic faults are modelled by shorts and opens. A parametric fault is defined as the minimum value of a physical parameter that causes a specification to be violated. Test metrics are then evaluated for the LNA case-study. As a result, test metrics for functional measurements such as S-parameters and noise figure are compared with low cost test measurements such as RMS and peak-to-peak current consumption and output voltage, input/output impedance, and the correlation between current consumption and output voltage","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114748911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Critical Resistance: A Timing-Based Critical Resistance Model for Statistical Delay Testing of Nanometer ICs
J. Rosselló, Carol de Benito, S. Bota, J. Segura
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266645
As CMOS IC feature sizes shrink into the nanometer regime, the need for more efficient test methods capable of dealing with new failure mechanisms increases. Advances in this domain require detailed knowledge of the physical properties of these failures and the development of appropriate test methods. Several works have shown a relative increase of resistive defects (both opens and shorts), and that these defects mainly affect circuit timing rather than static DC behavior. This defect evolution, together with the increase of parameter variations, represents a serious challenge for traditional delay test methods based on fixed delay limits. One alternative for dealing with variation relies on correlation, where the test limits for one parameter are set based on its correspondence to other circuit variables. In particular, the correlation of circuit delay to reduced VDD has been proposed as a useful test method. In this work the authors investigate the merits of this technique for future technologies, where variation is predicted to increase, analyzing the possibilities of detecting resistive shorts and opens.
{"title":"Dynamic Critical Resistance: A Timing-Based Critical Resistance Model for Statistical Delay Testing of Nanometer ICs","authors":"J. Rosselló, Carol de Benito, S. Bota, J. Segura","doi":"10.1145/1266366.1266645","DOIUrl":"https://doi.org/10.1145/1266366.1266645","url":null,"abstract":"As CMOS IC feature sizes shrink down to the nanometer regime, the need for more efficient test methods capable of dealing with new failure mechanisms increases. Advances in this domain require a detailed knowledge of these failure physical properties and the development of appropriated test methods. Several works have shown the relative increase of resistive defects (both opens and shorts), and that they mainly affect circuit timing rather than impacting its static DC behavior. Defect evolution, together with the increase of parameter variations, represents a serious challenge for traditional delay test methods based on fixed time delay limit setting. One alternative to deal with variation relies on adopting correlation where test limits for one parameter are settled based on its correspondence to other circuit variables. In particular, the correlation of circuit delay to reduced V DD has been proposed as a useful test method. In this work the authors investigate the merits of this technique for future technologies where variation is predicted to increase, analyzing the possibilities of detecting resistive shorts and opens","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124502011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Overhead Circuit Synthesis for Temperature Adaptation Using Dynamic Voltage Scheduling
Swaroop Ghosh, S. Bhunia, K. Roy
Pub Date: 2007-04-16 | DOI: 10.1145/1266366.1266702
Increasing power density causes die overheating due to the limited cooling capacity of the package. Conventional thermal-management techniques, e.g., logic shutdown, clock gating, frequency scaling, and simultaneous voltage-frequency tuning, increase the design complexity and/or degrade performance significantly. In this paper, the authors propose a novel design technique that makes a circuit amenable to temperature adaptation using dynamic voltage scheduling (DVS). It is accomplished by a synthesis technique that (a) isolates and predicts the set of paths that may become critical under variations, (b) ensures they are activated rarely, and (c) tolerates possible delay failures (at reduced voltage) in these paths by adaptive clock stretching. This allows a lower supply voltage to be scheduled at elevated temperature without requiring frequency tuning. Simulation results on an example pipeline show that the proposed design yields a temperature reduction similar to that of a conventional design, with only an 11% performance penalty and 14% area overhead. The conventional pipeline design, by contrast, suffers a 50% performance degradation due to its reduced operating frequency.
{"title":"Low-Overhead Circuit Synthesis for Temperature Adaptation Using Dynamic Voltage Scheduling","authors":"Swaroop Ghosh, S. Bhunia, K. Roy","doi":"10.1145/1266366.1266702","DOIUrl":"https://doi.org/10.1145/1266366.1266702","url":null,"abstract":"Increasing power density causes die overheating due to limited cooling capacity of the package. Conventional thermal management techniques e.g. logic shutdown, clock gating, frequency scaling, simultaneous voltage-frequency tuning etc. increase the design complexity and/or degrade the performance significantly. In this paper, the authors propose a novel design technique, which makes a circuit amenable to temperature adaptation using dynamic voltage scheduling (DVS). It is accomplished by a synthesis technique that (a) isolates and predicts the set of paths that may become critical under variations, (b) ensures they are activated rarely, and (c) tolerates possible delay failures (at reduced voltage) in these paths by adaptive clock stretching. This allows us to schedule a lower supply voltage during increased temperature without requiring frequency tuning. Simulation results on an example pipeline show that proposed design yields similar temperature reduction as conventional design with only 11% performance penalty and 14% area overhead. The conventional pipeline design, on contrary, leads to 50% performance degradation due to reduced operating frequency","PeriodicalId":298961,"journal":{"name":"2007 Design, Automation & Test in Europe Conference & Exhibition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127727831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}