{"title":"A 13-Bit 10-Mhz Adc Background-Calibrated with Real-Time Oversampling Calibrator","authors":"T. Shu, B. Song, K. Bacrania","doi":"10.1109/VLSIC.1994.586166","DOIUrl":null,"url":null,"abstract":"A real-time digital-domain code-error calibration technique is developed and demonstrated in this fully-Merential5-volt 13-bit IO-= BiCMOS ADC. The calibration process does not interfere with the normal operation of the converter but improves its linearity in real time while the converter is working. The core of this technique is an oversampling sigma-delta ratio calibrator working synchronously with the converter in background.' 1. Introdiictiou Existing self-calibration techniques for data converters interrupt normal conversion cycle for calibration[ I]. Temperature variation, device parameter drift, etc., may cause the calibration data to become invalid unless the converter is periodically re-calibrated. Other analogdomain calibration techniques are limited by the analog signal accuracy and circuit noise[2]. A novel digital-domain, code-error background calibration technique is implemented in this experimental 13-bit 10-MHZ ADC. The calibration procedure is virtually trans arent to the normal converter operation, thus allowing tg e converter to operate continuously even while being calibrated. 2.Real-Time Digital Calibration Technique In a multi-stage type ADC, the linearity of the D-to-A converter @AC) in the first stage determines the overall transfer characteristics of the ADC. To calibrate the ADC it is necessary to correct the linearity error of the fist stage DAC. In this ADC, a resistor-string DAC is used in the first stage because it allows both the calibrator circuit and the ADC amplifier to tap different DAC outputs simultaneously without affecting each other, provided that the DAC outputs settle within the clock phase. The DAC output errors are digitized by the calibrator and later subtracted from the raw output codes with the code-error calibration technique[3], To measure the resistor DAC error, a ratio-measurement method has been used. A switched-capacitor subtracter is used to measure the mid-point voltage error of a given section of the resistor string. A simplified diagram is shown in Figxe 1. Assuming ideal conditions,C, = C2 = C,,","PeriodicalId":350730,"journal":{"name":"Proceedings of 1994 IEEE Symposium on VLSI Circuits","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE Symposium on VLSI Circuits","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VLSIC.1994.586166","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
A real-time digital-domain code-error calibration technique is developed and demonstrated in this fully-differential 5-volt 13-bit 10-MHz BiCMOS ADC. The calibration process does not interfere with the normal operation of the converter but improves its linearity in real time while the converter is working. The core of this technique is an oversampling sigma-delta ratio calibrator working synchronously with the converter in the background.

1. Introduction

Existing self-calibration techniques for data converters interrupt the normal conversion cycle for calibration [1]. Temperature variation, device parameter drift, etc., may cause the calibration data to become invalid unless the converter is periodically re-calibrated. Other analog-domain calibration techniques are limited by analog signal accuracy and circuit noise [2]. A novel digital-domain code-error background calibration technique is implemented in this experimental 13-bit 10-MHz ADC. The calibration procedure is virtually transparent to normal converter operation, thus allowing the converter to operate continuously even while being calibrated.

2. Real-Time Digital Calibration Technique

In a multi-stage ADC, the linearity of the D-to-A converter (DAC) in the first stage determines the overall transfer characteristic of the ADC. To calibrate the ADC, it is necessary to correct the linearity error of the first-stage DAC. In this ADC, a resistor-string DAC is used in the first stage because it allows both the calibrator circuit and the ADC amplifier to tap different DAC outputs simultaneously without affecting each other, provided that the DAC outputs settle within the clock phase. The DAC output errors are digitized by the calibrator and later subtracted from the raw output codes with the code-error calibration technique [3]. To measure the resistor-DAC error, a ratio-measurement method is used: a switched-capacitor subtracter measures the mid-point voltage error of a given section of the resistor string. A simplified diagram is shown in Figure 1. Assuming ideal conditions, C1 = C2 = C, …
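As a rough illustration of the code-error idea described above, the sketch below assumes the first-stage DAC segment errors have already been digitized by the background calibrator; correcting a conversion then reduces to a digital subtraction of the stored error for the segment used. This is a minimal sketch under those assumptions, not the authors' circuit or calibration logic, and all names and values are hypothetical.

```python
def midpoint_error(v_lo, v_mid, v_hi):
    """Error an ideal switched-capacitor subtracter (C1 = C2) would report:
    deviation of the tapped mid-point from the average of the section end points."""
    return v_mid - 0.5 * (v_lo + v_hi)

def calibrate_code(raw_code, segment, code_errors):
    """Code-error calibration: subtract the stored (digitized) error of the
    first-stage DAC segment that produced this conversion."""
    return raw_code - code_errors.get(segment, 0)

# Hypothetical example: segment 5 of the first-stage DAC reads 3 LSBs high,
# so 3 is subtracted from every raw code that used that segment.
code_errors = {5: 3}
print(calibrate_code(4123, 5, code_errors))   # -> 4120
print(midpoint_error(0.250, 0.3131, 0.375))   # mid-point error of one string section
```

Because the correction is purely digital, it can be updated continuously by the background calibrator without interrupting conversions, which is the property the paper emphasizes.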