Software-enabled design visibility enhancement for failure analysis process improvement
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158126
Chia-Chih Yen, Shen-Tien Lin, Kai Yang, Jerome Peillat, Paul Gibson, E. Auvray
The traditional failure analysis (FA) process proceeds by investigating tester results for several suspect silicon signals and then applying CAD tools to navigate and compare pre-silicon design behaviors. However, existing CAD tools usually lack design visibility due to the imperfect link between the test and design environments. In this paper, we introduce a series of design visibility enhancement tools to augment the FA process flow. These tools not only feature design comprehension and logic tracing capabilities, but also expand and correlate silicon data with design functionality. With this seamless visibility enhancement environment, we show that the FA process can be performed more efficiently.
{"title":"Software-enabled design visibility enhancement for failure analysis process improvement","authors":"Chia-Chih Yen, Shen-Tien Lin, Kai Yang, Jerome Peillat, Paul Gibson, E. Auvray","doi":"10.1109/VDAT.2009.5158126","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158126","url":null,"abstract":"Traditional failure analysis (FA) process proceeds by investigating the tester results of several suspected silicon signals, and then applying CAD tools to navigate and compare pre-silicon design behaviors. However, existing CAD tools usually lack of design visibility due to the imperfect link between test and design environments. In this paper, we introduce a series of design visibility enhancement tools to augment FA process flow. These tools not only feature design comprehension and logic tracing capability, but also expand and correlate silicon data to design functionality. With the seamless visibility enhancement environment, we show the FA process can be performed more efficiently.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116689830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual prototyping increases productivity - A case study
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158104
Prasad Avss, Sasidharan Prasant, R. Jain
With advances in technology, more and more functionality is being integrated into SoCs. A typical SoC contains one or more micro-controllers, several peripherals, and embedded memories. On the software side, a large body of embedded software goes into the products built on these complex SoCs. In this era of a consumer-driven economy, all product design groups are under tremendous pressure to meet aggressive time-to-market schedules and still deliver the right solution the first time. This creates a need for a robust product flow that enables different teams to work simultaneously and coherently. The following are some of the key activities in any product development flow:
• System Engineering
  • Map customer requirements to design features.
  • Optimize the design to meet the requirements in the best possible way.
• Hardware design
  • Design, develop, and integrate the different hardware (HW) modules/blocks.
  • Develop reference models for validating the different modules/blocks/sub-systems.
• Software development
  • Design, develop, and integrate the different software (SW) modules.
  • Develop reference models for validating these modules/sub-systems.
• System Validation
  • Build a system.
  • Port the software onto the system.
  • Validate the system with true system scenarios.
• Customer Delivery
{"title":"Virtual prototyping increases productivity - A case study","authors":"Prasad Avss, Sasidharan Prasant, R. Jain","doi":"10.1109/VDAT.2009.5158104","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158104","url":null,"abstract":"With the advancement in technology, more and more functionality is being integrated into SoCs. A typical SoC contains one or more micro-controllers, several peripherals and embedded memories. In the software arena, there is a whole lot of embedded software that goes into products, built using these complex SoCs. In this era of consumer driven economy, all the product design groups are under a tremendous pressure to meet the aggressive time-to-market schedules and still deliver the right solution the first time. This creates a need for having a robust product flow, which enables different teams to work simultaneously and coherently. Following are some of the key activities in any product development flow. • System Engineering • Map customer requirements to design features. • Optimize design to meet the requirements in the best possible way. • Hardware design • Design, develop and integrate different Hardware (HW) or design modules/blocks • Develop reference models for validating different modules/blocks/sub-systems • Software development • Design, develop and integrate different Software (SW) modules • Develop reference models for validating these modules/sub-systems • System Validation • Build a system • Port the software onto the system • Validate the system with true system scenarios. • Customer Delivery","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"248 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126788539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost efficient FEQ implementation for IEEE 802.16a OFDM transceiver
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158118
Chih-Hsien Lin, Yi-Hsien Lin, Chih-Feng Wu, M. Shiue, Chorng-Kuang Wang
Based on the SR transformation, a cost-efficient FEQ is proposed for the IEEE 802.16a WMAN OFDM transceiver without SNR loss over the multipath fading channel. The cost-efficient FEQ is composed of three parts: channel estimation, filtering, and updating processes. Notably, the multiplication complexity of the cost-efficient approach is reduced by a total of 19% compared with the conventional approach. With the memory arrangement considered in the VLSI design, the area and power of the channel estimation can be decreased by 70% and 50%, respectively. In the updating process, an 18% reduction is obtained for both area and power. According to uncoded SER simulations, the proposed approach performs identically to the conventional approach. Finally, the cost-efficient FEQ is demonstrated on an FPGA board.
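As a point of reference, the sketch below models the conventional one-tap FEQ baseline the abstract compares against: least-squares channel estimation on a known preamble followed by per-subcarrier division. The SR-transformation-based cost reduction itself is not reproduced, and the subcarrier count and channel taps are illustrative placeholders.

```python
# Minimal behavioral sketch of a conventional one-tap frequency-domain equalizer
# (FEQ) for an OFDM receiver. Illustrative only; the paper's SR-transformation
# optimization is not modeled here.
import numpy as np

def estimate_channel(rx_preamble_fd, known_preamble_fd):
    """Least-squares channel estimate per subcarrier: H_hat = Y / X."""
    return rx_preamble_fd / known_preamble_fd

def feq(rx_symbol_fd, h_est):
    """One-tap equalization of a received OFDM symbol in the frequency domain."""
    return rx_symbol_fd / h_est

# Toy usage with a hypothetical 256-subcarrier symbol and a 3-tap multipath channel
n = 256
preamble = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, n))  # QPSK preamble
h = np.fft.fft(np.array([1.0, 0.5, 0.2]), n)                    # channel frequency response
h_hat = estimate_channel(h * preamble, preamble)
data = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, n))      # QPSK data symbol
equalized = feq(h * data, h_hat)
assert np.allclose(equalized, data)                              # flat channel after FEQ
```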
{"title":"Cost efficient FEQ implementation for IEEE 802.16a OFDM transceiver","authors":"Chih-Hsien Lin, Yi-Hsien Lin, Chih-Feng Wu, M. Shiue, Chorng-Kuang Wang","doi":"10.1109/VDAT.2009.5158118","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158118","url":null,"abstract":"Based on SR transformation, a cost efficient FEQ is proposed for OFDM transceiver of IEEE 802.16a WMAN without SNR loss over the multipath fading channel. The cost efficient FEQ is composed of three parts: channel estimation, filtering and updating processes. Significantly, the computing complexity of multiplication for the cost efficient approach can totally yield 19% reduction compared with the conventional approach. In view of the memory arrangement in VLSI design, the area and power can be decreased by 70% and 50% respectively for the channel estimation. In the updating, 18% reduction is obtained for both area and power. According to the uncoded SER simulation, the proposed approach is identical with the conventional approach. Finally, the cost efficient FEQ is demonstrated by FPGA board.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130682834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power and noise aware test using preliminary estimation
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158160
K. Noda, H. Ito, K. Hatayama, T. Aikyo
Power consumption and IR-drop during testing have become serious problems. Troubles such as tester failures due to excessive power consumption or IR-drop, and test escapes due to a slowed clock cycle, can occur on the test floor. In this paper, we propose a power and noise aware scan test method in which power-aware DFT and power-aware ATPG are executed based on a preliminary power/noise estimation for the test. Experimental results illustrate the effectiveness of reducing IR-drop in both the shift and capture modes of scan test.
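As a hedged illustration of what a preliminary shift-power estimate can look like, the sketch below computes the weighted transition count (WTC) of a scan vector, a commonly used proxy for scan shift power. The abstract does not state which estimator the authors use, so this is an assumption for illustration only.

```python
# Minimal sketch of one common preliminary shift-power metric: the weighted
# transition count (WTC) of a scan-in vector. A transition entered nearer the
# scan input is shifted through more flip-flops, so it is weighted by the number
# of remaining shift cycles. Illustrative only; not the paper's estimator.
def weighted_transition_count(scan_vector: str) -> int:
    """scan_vector: bit string, index 0 being the bit nearest the scan input."""
    n = len(scan_vector)
    wtc = 0
    for i in range(n - 1):
        if scan_vector[i] != scan_vector[i + 1]:
            wtc += n - 1 - i          # weight transition by remaining shift cycles
    return wtc

print(weighted_transition_count("0101"))  # 3 + 2 + 1 = 6 for this toy pattern
```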
{"title":"Power and noise aware test using preliminary estimation","authors":"K. Noda, H. Ito, K. Hatayama, T. Aikyo","doi":"10.1109/VDAT.2009.5158160","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158160","url":null,"abstract":"Issues on power consumption and IR-drop in testing become serious problems. Some troubles, such as tester fails due to too much power consumption or IR-drop, test escapes due to slowed clock cycle, and so on, can happen in test floors. In this paper, we propose a power and noise aware scan test method. In the method, power-aware DFT and power-aware ATPG are executed based on the preliminary power/noise estimation for test. Experimental results illustrate the effect of reducing IR-drop for both shift and capture mode in scan test.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130773835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Circuit acyclic clustering with input/output constraints and applications
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158107
Rung-Bin Lin, Tsung-Han Lin, Shin-An Wu
This article studies a new circuit acyclic clustering problem that divides a combinational circuit into groups of sub-circuits, each of which has a limited number of inputs and outputs. Several heuristics are proposed to solve this problem. As an application of our approach, we achieve a 300% speedup in logic simulation for finding an input vector that incurs minimum or maximum leakage power dissipation.
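To make the problem concrete, here is a minimal sketch of one simple heuristic (not necessarily any of the paper's): gates are visited in topological order and greedily packed into a cluster until adding the next gate would violate the input or output limit. Because each cluster is a contiguous segment of the topological order, every inter-cluster edge points forward, so the cluster graph stays acyclic. The gate names and limits are hypothetical.

```python
# Greedy acyclic clustering sketch under input/output constraints.
from collections import defaultdict

def cluster_dag(topo_order, fanin, fanout, max_in, max_out):
    """topo_order: gates in topological order.
    fanin[g]/fanout[g]: predecessor/successor gates (or PI/PO names) of gate g."""
    clusters, current = [], []

    def io_counts(nodes):
        s = set(nodes)
        ins = {p for g in nodes for p in fanin[g] if p not in s}      # external drivers
        outs = {g for g in nodes if any(q not in s for q in fanout[g])}  # externally used gates
        return len(ins), len(outs)

    for g in topo_order:
        n_in, n_out = io_counts(current + [g])
        if current and (n_in > max_in or n_out > max_out):
            clusters.append(current)      # close the cluster, start a new one
            current = [g]
        else:
            current.append(g)
    if current:
        clusters.append(current)
    return clusters

# Toy example: gates a and b feed c; limit each cluster to 2 inputs / 1 output.
fanin = {"a": ["i1", "i2"], "b": ["i3", "i4"], "c": ["a", "b"]}
fanout = defaultdict(list, {"a": ["c"], "b": ["c"], "c": ["po"]})
print(cluster_dag(["a", "b", "c"], fanin, fanout, max_in=2, max_out=1))
# -> [['a'], ['b'], ['c']]
```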
{"title":"Circuit acyclic clustering with input/output constraints and applications","authors":"Rung-Bin Lin, Tsung-Han Lin, Shin-An Wu","doi":"10.1109/VDAT.2009.5158107","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158107","url":null,"abstract":"This article studies a new circuit acyclic clustering problem which divides a combinational circuit into groups of sub-circuits, each of which has limited numbers of inputs and outputs. Several heuristics are proposed to solving this problem. We achieve 300% speedup on logic simulation, with an application of our approach, for finding an input vector that incurs minimum or maximum leakage power dissipation.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131173020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital PWM controller for SIDO switching converter with time-multiplexing scheme
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158093
Chi-Wai Leng, Chun-Hung Yang, Chien-Hung Tsai
Because digital controllers offer significant advantages in DC-DC converters, this paper proposes a digital PWM controller for a single-inductor dual-output (SIDO) switching converter operating in discontinuous-conduction mode (DCM). By adopting a time-multiplexing (TM) scheme, the converter provides two independent supply voltages using only one inductor, which makes it suitable for portable devices and system-on-chip (SoC) integration. The design issues of each block, including the analog-to-digital converter (ADC), the digital compensator, and the digital pulse-width modulator (DPWM), are discussed. To save chip area, a single look-up-table-based compensator and a modified hybrid DPWM are developed. Simulation results verify the validity of the proposed work.
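For readers unfamiliar with the signal chain, the sketch below shows the generic structure the abstract refers to: a PID-like digital compensator whose coefficient products could be pre-computed in look-up tables, followed by quantization of the duty command to a finite-resolution DPWM code. The coefficients, resolutions, and function names are placeholders, not the paper's single-LUT or hybrid-DPWM design.

```python
# Generic digital compensator + DPWM quantization sketch (illustrative values only).
def compensator_step(state, error, b=(2.0, -3.5, 1.6)):
    """PID-like difference equation d[n] = d[n-1] + b0*e[n] + b1*e[n-1] + b2*e[n-2].
    In a LUT-based implementation, each b_i*e product would be read from a
    pre-computed table indexed by the quantized ADC error code."""
    d_prev, e1, e2 = state
    d = d_prev + b[0] * error + b[1] * e1 + b[2] * e2
    return (d, error, e1), d

def dpwm_quantize(duty, bits=8):
    """Map the duty command to one of 2**bits discrete DPWM levels."""
    return max(0, min(2**bits - 1, int(round(duty * (2**bits - 1)))))

state = (0.5, 0.0, 0.0)                       # initial duty and past errors
state, duty = compensator_step(state, error=0.01)
print(dpwm_quantize(duty))                    # discrete duty-cycle code for the power stage
```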
{"title":"Digital PWM controller for SIDO switching converter with time-multiplexing scheme","authors":"Chi-Wai Leng, Chun-Hung Yang, Chien-Hung Tsai","doi":"10.1109/VDAT.2009.5158093","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158093","url":null,"abstract":"Based on digital controllers offering significant advantages in DC-DC converters, this paper proposes a digital PWM controller for single-inductor dual-output (SIDO) switching converter operating in discontinuous-conduction mode (DCM). By adopting time-multiplexing (TM) scheme, this converter provides two independent supply voltages using only one inductor, which is suitable for portable devices and system-on-chip (SoC) integration. All design issues of each block including analog-to-digital converter (ADC), digital compensator and digital pulse width modulator (DPWM) are discussed. To save chip area, single look-up table based compensator and modified hybrid DPWM are developed. Simulation results are shown to verify the validity of the proposed work.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134166330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and analysis of 1–60GHz, RF CMOS peak detectors for LNA calibration
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158157
Karthik Jayaraman, Q. Khan, P. Chiang, B. Chi
A CMOS peak detector for 1–60 GHz RF applications is presented. The peak detector tracks the output voltage of an LNA/VCO, and the measured signal is used to tune the LNA/VCO to the desired frequency. Different peak detector circuit topologies are analyzed, and their performance metrics, such as gain, bandwidth, and the nature of the response, are compared. The peak detectors were designed for low-frequency (2.4 GHz) and high-frequency (55–60 GHz) applications and tested using two sample LNAs at their respective frequencies. One of the proposed CMOS peak detectors (90 nm) exploits the higher ƒT of the process to achieve 60 GHz operation with optimal power consumption and area overhead, while the other, low-frequency peak detector was designed in 180 nm CMOS. The peak detector is compared with state-of-the-art detectors. Its main advantages are its minimal area overhead and power consumption.
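Purely as a behavioral illustration of the role a peak detector plays in such a calibration loop, and not a model of any of the compared circuit topologies, the sketch below tracks the envelope of a signal with an instantaneous attack and a slow leak; the attack and leak rates are arbitrary placeholders.

```python
# Behavioral peak-hold model: the detected envelope is what a tuning loop would
# compare against a target amplitude. Illustrative rates only.
import math

def peak_detect(samples, attack=1.0, leak=0.001):
    out, peak = [], 0.0
    for x in samples:
        v = abs(x)
        peak = peak + attack * (v - peak) if v > peak else peak * (1 - leak)
        out.append(peak)
    return out

tone = [math.sin(2 * math.pi * 0.05 * n) for n in range(400)]
print(round(max(peak_detect(tone)), 3))   # ~1.0, the detected tone amplitude
```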
{"title":"Design and analysis of 1–60GHz, RF CMOS peak detectors for LNA calibration","authors":"Karthik Jayaraman, Q. Khan, P. Chiang, B. Chi","doi":"10.1109/VDAT.2009.5158157","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158157","url":null,"abstract":"A CMOS peak detector for 1–60GHz RF applications is presented. This peak detector tracks the output voltage of a LNA/VCO and the measured signal is used to tune the LNA/VCO to the desired frequency. Different peak detector circuit topologies are analyzed and their performance metrics such as gain, bandwidth and nature of response are compared. The peak detectors were designed for low (2.4GHz) and high (55–60 GHz) frequency application and tested using two sample LNAs at their respective frequencies. While one of the proposed CMOS peak detectors (90nm) exploits the higher ƒT to achieve 60GHz operation with optimal power consumption and area overhead, the other low frequency peak detector was designed in 180nm CMOS. The peak detector is compared with the state of the art detectors. The main advantage of this detector is its minimal area overhead and power consumption.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134278846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Built-in self-repair techniques for content addressable memories
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158146
Guan-Quan Lin, Zhen-Yu Wang, Shyue-Kung Lu
In this paper, we propose block-level replacement techniques for content-addressable memories (CAMs). The CAM array is first divided into row banks and column banks. Then, for each divided array (the CAM cells at the intersection of a row bank and a column bank), two redundant row blocks are added, and reconfiguration is performed at the block level instead of the conventional word level. According to simulation results, the hardware overhead is 1.31% for a 1024 × 1024-bit CAM array. We also analyze the repair rates of our approaches and find that they achieve higher repair rates.
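The sketch below illustrates the block-level idea at a high level: faults are grouped by the divided array (row bank × column bank) and the row block they fall into, and each divided array remaps its faulty blocks to its redundant row blocks, failing if more blocks are faulty than spares exist. The bank and block dimensions and the spare count are illustrative assumptions, not the paper's exact architecture or repair algorithm.

```python
# Block-level spare allocation sketch for a banked CAM array (illustrative sizes).
from collections import defaultdict

def allocate_spares(fault_cells, rows_per_block=8, bank_rows=256, bank_cols=256,
                    spares_per_subarray=2):
    faulty_blocks = defaultdict(set)
    for r, c in fault_cells:
        subarray = (r // bank_rows, c // bank_cols)   # (row bank, column bank)
        faulty_blocks[subarray].add(r // rows_per_block)
    remap = {}
    for subarray, blocks in faulty_blocks.items():
        if len(blocks) > spares_per_subarray:
            return None                               # not repairable with block-level spares
        for spare_id, block in enumerate(sorted(blocks)):
            remap[(subarray, block)] = spare_id       # faulty block -> redundant row block
    return remap

print(allocate_spares([(3, 10), (5, 12), (300, 700)]))
# two faults share one block of divided array (0, 0); one fault hits divided array (1, 2)
```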
{"title":"Built-in self-repair techniques for content addressable memories","authors":"Guan-Quan Lin, Zhen-Yu Wang, Shyue-Kung Lu","doi":"10.1109/VDAT.2009.5158146","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158146","url":null,"abstract":"In this paper, we propose block-level replacement techniques for content-addressable memories. The CAM array is first divided into row banks and column banks. Then, for each divided array (the overlapped CAM cells of a row bank and a column bank), two redundant row blocks are added and reconfiguration is performed at the block level instead of the conventional word level. According to simulation results, the hardware overhead is 1.31% for a 1024 × 1024-bit CAM array. We also analyze the repair rates of our approaches. It is also found that our approach will achieve higher repair rates.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131662786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Co-calibration of capacitor mismatch and comparator offset for 1-bit/stage pipelined ADC
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158145
Xuan-Lun Huang, Ping-Ying Kang, Y. Yu, Jiun-Lang Huang
In this paper, we present a histogram-based two-phase calibration technique for the capacitor mismatch and comparator offset of 1-bit/stage pipelined analog-to-digital converters (ADCs). In the first phase, the missing decision levels are calibrated by capacitor resizing; unlike previous works, which require large capacitor arrays, only a few switches are added to the circuit. The second phase performs missing-code elimination. The technique achieves better calibrated linearity and provides better mismatch tolerance than the traditional digital calibration technique. Simulation results show that the proposed technique effectively improves both the static and dynamic performance.
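As a hedged illustration of the histogram test underlying such a calibration, the sketch below histograms the output codes of a ramp-driven ADC and flags empty bins (missing codes) and oversized bins (codes widened by a neighboring missing decision level). How those flags are translated into capacitor-resizing switch settings or digital correction is the paper's contribution and is not modeled; the bit width and thresholds are placeholders.

```python
# Histogram screening of ADC output codes from a slow full-scale ramp.
from collections import Counter

def code_histogram_check(codes, n_bits, wide_factor=1.8):
    hist = Counter(codes)
    expected = len(codes) / 2 ** n_bits            # ideal hits per code for a linear ramp
    missing = [c for c in range(2 ** n_bits) if hist[c] == 0]
    wide = [c for c, h in hist.items() if h > wide_factor * expected]
    return missing, wide

# Toy 3-bit "ADC" whose code 5 is missing and gets absorbed into code 4
samples = [0]*10 + [1]*10 + [2]*10 + [3]*10 + [4]*20 + [6]*10 + [7]*10
print(code_histogram_check(samples, n_bits=3))     # -> ([5], [4])
```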
{"title":"Co-calibration of capacitor mismatch and comparator offset for 1-bit/stage pipelined ADC","authors":"Xuan-Lun Huang, Ping-Ying Kang, Y. Yu, Jiun-Lang Huang","doi":"10.1109/VDAT.2009.5158145","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158145","url":null,"abstract":"In this paper, we present a histogram-based two-phase calibration technique for capacitor mismatch and comparator offset of 1-bit/stage pipelined Analog-to-Digital Converters (ADCs). In the first phase, it calibrates the missing decision levels by capacitor resizing. Unlike previous works which require large capacitor arrays, only few switches are added to the circuit. The second phase performs missing code elimination. It achieves better calibrated linearity and provides better mismatch tolerance than the traditional digital calibration technique. Simulation results show that the proposed technique effectively improves both the static and dynamic performance.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133885940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reconfigurable architecture for entropy decoding and IDCT in H.264
Pub Date: 2009-04-28 · DOI: 10.1109/VDAT.2009.5158149
Chia-Cheng Lo, Shang-Ta Tsai, Ming-Der Shieh
Reconfigurable hardware is an effective design option for coping with the increasing demand for both flexibility and computational power in system design. This paper explores techniques for combining the two entropy decoding methods defined in the H.264 standard, context-based adaptive binary arithmetic coding (CABAC) and context-based adaptive variable-length coding (CAVLC), on a coarse-grain reconfigurable architecture. Coarse-grain reconfigurable architectures can provide clear advantages over their fine-grain counterparts for some specific applications. By analyzing the similarities and differences between the two decoding processes, we show how to effectively merge CAVLC into a CABAC decoder. Experimental results reveal that about 1.5K gate counts can be saved using the proposed reconfigurable cell (RC) architecture, which corresponds to a 25.4% area saving in implementing the CAVLC decoder. Moreover, using the idle time in the RC arrays, the base cell can be extended to carry out the inverse discrete cosine transform with very limited overhead. Our entropy decoder design, operating at 66 MHz, can decode video sequences at MP@Level 3.0 under the real-time constraint.
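To illustrate the kind of front-end logic the two modes can share, the sketch below shows a single bitstream reader whose shifter and leading-zero counter feed an Exp-Golomb parse on the VLC side; a CABAC regular-bin decode would reuse the same read front end but is omitted. This is only a hedged sketch of the sharing idea with hypothetical class and byte values, not the paper's RC-array mapping.

```python
# Shared bitstream front end (bit buffer, shifter, leading-zero count) that both
# a CAVLC-mode parser and a CABAC bin decoder could time-share.
class BitReader:
    def __init__(self, data: bytes):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0

    def read_bits(self, n: int) -> int:          # shared shifter/buffer
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

    def count_leading_zeros(self) -> int:        # used by Exp-Golomb/VLC parsing
        n = 0
        while self.bits[self.pos + n] == "0":
            n += 1
        return n

def decode_ue(reader: BitReader) -> int:
    """Unsigned Exp-Golomb code, one of the VLC forms parsed in CAVLC mode."""
    zeros = reader.count_leading_zeros()
    return reader.read_bits(2 * zeros + 1) - 1

print(decode_ue(BitReader(bytes([0b00110000]))))  # codeword 00110 decodes to 5
```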
{"title":"A reconfigurable architecture for entropy decoding and IDCT in H.264","authors":"Chia-Cheng Lo, Shang-Ta Tsai, Ming-Der Shieh","doi":"10.1109/VDAT.2009.5158149","DOIUrl":"https://doi.org/10.1109/VDAT.2009.5158149","url":null,"abstract":"Reconfigurable hardware is an effective design option to cope with the increasing demands of simultaneous flexibility and computation power in system design. This paper explores techniques to combine the two entropy decoding methods, context-based adaptive binary arithmetic coding (CABAC) and context-based adaptive variable length coding (CAVLC), defined in the H.264 standard using the coarse-grain reconfigurable architecture. Coarsegrain reconfigurable architectures can provide obvious advantages over their fine-grain counterparts for some specific applications. By analyzing the similarities and differences between these two decoding processes, we show how to effectively merge CAVLC into a CABAC decoder. Experimental results reveal that about 1.5K savings in gate counts can be obtained using the proposed reconfigurable cell (RC) architecture, which corresponds to 25.4% area savings in implementing the CAVLC decoder. Moreover, using the idle time in RC arrays, the base cell can be extended to carry out the inverse discrete cosine transform with very limited overhead. Our entropy decoder design, operated in 66 MHz, can decode video sequences at MP@ Level 3.0 under the real-time constraint.","PeriodicalId":246670,"journal":{"name":"2009 International Symposium on VLSI Design, Automation and Test","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116279037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}