Pub Date: 2009-04-15 | DOI: 10.1109/DDECS.2009.5012086
Challenges for test and design for test
A. Chichkov
Whenever test is mentioned, the same remarks have been repeated for the last 20 years: ICs are too fast, patterns are too big, testing is too slow, and test development is too costly. Although advances in technology have improved test in general, these statements seem to prevail and still sound valid. One reason is that every improvement in test strategy is matched by an improvement in technology and design strategy that keeps the gap open. On the other hand, ATE equipment is inevitably built with technology that is one generation older, which keeps the challenges of speed, noise and complexity alive. This presentation discusses the following challenges for test: How is the gap evolving between test methods, tools and equipment on one side and technology, design methodology and design tools on the other? How is the cost of production test equipment, and as a consequence the cost of test, evolving? Is the cost of test development changing? What about testing for high-quality and high-reliability applications? And, last but not least, what about research in the test domain during an economic crisis?
{"title":"Challenges for test and design for test","authors":"A. Chichkov","doi":"10.1109/DDECS.2009.5012086","DOIUrl":"https://doi.org/10.1109/DDECS.2009.5012086","url":null,"abstract":"If test is mentioned normally there are several remarks that have been repeated for the last 20 years. ICs are too fast, patterns are too big, testing is too slow, the test development too costly. Although, the advance of the technology has improved test in general, these statements seems to prevail and sound still valid. One reason is that on every improvement of test strategies there is also improvement of technology and design strategy that keeps the gap open. On the other hand ATE equipment inevitably is build with one generation older technology that keeps the challenge of speed noise and complexity alive. In this presentation the following few challenges for test will be further discussed. How is evolving the gap between test methods tools and equipment on one side and technology, design methodology and design tools on the other? How is evolving the cost of production test equipment and as consequence the cost of test? Is there change in the cost of test development? What about high quality and reliability application testing? And last but not least what about research in the test domain during economic crisis.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128502456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2009-04-15 | DOI: 10.1109/DDECS.2009.5012084
Design tools and circuit solutions for degradation-resilient analog circuits in nanometer CMOS
G. Gielen
With the advanced scaling of CMOS technology into the nanometer range, highly integrated mixed-signal systems can be designed. The use of nanometer CMOS, however, poses many challenges. This keynote presentation gives an overview of the problems caused by increased variability and degraded reliability. Both have to be addressed by the designer, either at IC design time or through reconfiguration at IC run time. Design tools for the efficient analysis and identification of reliability problems in analog circuits are described. Run-time circuit adaptation techniques are also presented that allow a circuit to recover from degradation failures.
{"title":"Design tools and circuit solutions for degradation-resilient analog circuits in nanometer CMOS","authors":"G. Gielen","doi":"10.1109/DDECS.2009.5012084","DOIUrl":"https://doi.org/10.1109/DDECS.2009.5012084","url":null,"abstract":"With the advanced scaling of CMOS technology in the nanometer range, highly integrated mixed-signal systems can be designed. The use of nanometer CMOS, however, poses many challenges. This keynote presentation gives an overview of problems due to increased variability and reliability. Both have to be addressed by the designer, either at IC design time or through reconfiguration at IC run time. Design tools for the efficient analysis and identification of reliability problems in analog circuits is described. Also, run-time circuit adaptation techniques are presented that allow a circuit to recover from degradation failures.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128131740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-04-16 | DOI: 10.1109/DDECS.2008.4538742
The Wall Ahead is Made of Rubber
K. Flautner
Silicon technology evolution over the last four decades has yielded an exponential increase in integration densities, with continual improvements in performance and power consumption at each technology generation. This steady progress has created a sense of entitlement to the riches that future process generations would bring. Today, however, classical process scaling seems to be dead, and living up to technology expectations requires continuous innovation at many levels, which comes at steadily increasing implementation and design costs. Solutions to these problems need to cut across layers of abstraction and require coordination between software, architecture and circuit features.
{"title":"The Wall Ahead is Made of Rubber","authors":"K. Flautner","doi":"10.1109/DDECS.2008.4538742","DOIUrl":"https://doi.org/10.1109/DDECS.2008.4538742","url":null,"abstract":"Silicon technology evolution over the last four decades has yielded an exponential increase in integration densities with continual improvements of performance and power consumption at each technology generation. This steady progress has created a sense of entitlement for the riches that future process generations would bring. Today, however, classical process scaling seems to be dead and living up to technology expectations requires continuous innovation at many levels, which comes at steadily progressing implementation and design costs. Solutions to problems need to cut across layers of abstractions and require coordination between software, architecture and circuit features.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115655099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-04-16 | DOI: 10.1109/DDECS.2008.4538743
The Quest for Test: Will Redundancy Cover All?
H. Manhaeve
Until now, test has been the cornerstone and final verification step that assures products work correctly and reliably. We have seen the evolution from functional to structural test and from pass/fail to data-centric test. Growing device and integration complexity is causing test cost to become a dominant factor in the final product cost. This conflicts with the demand for ever cheaper and better electronics. So what can we do to resolve this conflict? We have seen the evolution from integrated devices to integrated circuits to integrated systems. With ever-shrinking transistor dimensions and transistors becoming virtually "free", the question arises: do we still need test, or can redundancy cover all? Integrating systems means further combining hardware, software, analog and digital functionality. Each of these domains, as well as their interactions and interfaces, poses particular design and test issues. How can we address these and still arrive at a cost-conscious, reliably working product that also meets power-consumption requirements? Can redundancy tackle it all? This presentation addresses the questions raised and presents a viewpoint on the future of test.
{"title":"The Quest for Test: Will Redundancy Cover All?","authors":"H. Manhaeve","doi":"10.1109/DDECS.2008.4538743","DOIUrl":"https://doi.org/10.1109/DDECS.2008.4538743","url":null,"abstract":"Till now Test has been the cornerstone and final verification step to assure that products are working correct and reliably. We've seen the evolution from functional to structural test and from pass/fail to data concentric test. The growing device and integration complexity causes the test cost do become a dominant factor of the final product cost. This is in conflict with the requirement for always cheaper and better electronics. So what can we do to resolve this conflict? We have seen the evolution from integrated devices to integrated circuits to integrated systems. With ever shrinking transistor dimensions and transistors virtually becoming \"for free\" the question arises:\" do we still need Test or can redundancy cover all?\" Integrating systems means further combining hardware, software, analog and digital functionality. Each of these domains as well as there interactions and interfaces poses particular design and test issues. How can we address these and still come up with a cost conscious and reliably working product, thereby also meeting power consumption requirements? Can redundancy tackle all? This presentation will address the questions raised and present a viewpoint on the Future for Test.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116322575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-04-16 | DOI: 10.1109/DDECS.2008.4538741
The Guiding Light for Chip Testing
S. Kundu
Scaling of transistor feature size over time has been facilitated by corresponding improvements in lithography technology. In recent times, however, the wavelength of the optical light source used for photolithography has not scaled. Since the 180 nm node, the wavelength of the optical source has remained fixed at 193 nm. Consequently, current and upcoming technology nodes at 65 nm, 45 nm, 32 nm and 22 nm use a light source with a wavelength much greater than the feature size. This creates a peculiar problem in which the line width of manufactured devices becomes a function of the relative spacing between adjacent lines. Despite numerous restrictions in the layout rules, interconnects may still suffer from constriction due to this peculiarity, also known as the forbidden-pitch problem. In this talk, we explore the range of issues arising from photolithography as they relate to chip testing.
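For context, a rough back-of-the-envelope illustration (not from the talk; the process factor k1 and numerical aperture NA are assumed values) of how 193 nm light is stretched to print sub-wavelength features, using the classical Rayleigh resolution criterion:

```latex
% Rayleigh criterion for the minimum printable half-pitch (values assumed for illustration):
\[
  HP_{\min} \;=\; k_1 \,\frac{\lambda}{NA}
  \;\approx\; 0.3 \times \frac{193\,\text{nm}}{1.35}
  \;\approx\; 43\,\text{nm},
\]
% Printing 45/32 nm features with a 193 nm source therefore relies on immersion optics
% (NA > 1), resolution-enhancement techniques and heavily restricted layout pitches --
% the regime in which the forbidden-pitch effect described above appears.
```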
{"title":"The Guiding Light for Chip Testing","authors":"S. Kundu","doi":"10.1109/DDECS.2008.4538741","DOIUrl":"https://doi.org/10.1109/DDECS.2008.4538741","url":null,"abstract":"Scaling of transistor feature size over time has been facilitated by corresponding improvement in lithography technology. However, in recent times the wavelength of the optical light source used for photolithography has not scaled. Starting with 180nm devices, the wavelength of optical source has remained the same at 193nm. Consequently, current and upcoming technology nodes at 65nm, 45nm, 32nm and 22nm will be using a light source with wavelength much greater than the feature size. This creates a peculiar problem where line width on manufactured devices is a function of relative spacing between adjacent lines. Despite numerous restriction on layout rules, interconnects may still suffer from constriction due to this peculiarity also known as forbidden pitch problem. In this talk, we will explore the range of issues that arise from photolithography as they relate to chip testing.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132323212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2007-04-11 | DOI: 10.1109/DDECS.2007.4295247
Design and Test of Microfluidic Biochips
K. Chakrabarty
Microfluidics-based biochips are revolutionizing laboratory procedures in molecular biology. Advances in microfluidics technology offer exciting possibilities for high-throughput DNA sequencing analysis, protein crystallization, drug discovery, immunoassays, and environmental toxicity monitoring. Another emerging application area for microfluidics-based biochips is clinical diagnostics, especially the immediate point-of-care diagnosis of diseases. Defect tolerance is a key requirement for biochips used in healthcare and environmental monitoring. There is a need to deliver the same level of computer-aided design (CAD) support to the biochip designer that the semiconductor industry now takes for granted. Such CAD tools will allow designers to harness the new technology that is rapidly emerging for integrated biofluidics. This talk presents early work on design and test techniques for microfluidic biochips. The speaker will describe synthesis tools that can map behavioral descriptions to a droplet-based microfluidic biochip and generate an optimized schedule of bioassay operations, the binding of assay operations to functional units, and the layout and droplet flow paths for the biochip. Cost-effective testing techniques will be presented to detect faults after manufacture and during field operation. It will be shown how on-line and off-line reconfiguration techniques can be used to bypass faults easily once they are detected. The biochip user can thus concentrate on developing the nano- and micro-scale bioassays, leaving implementation details to design automation tools.
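To make the scheduling/binding step concrete, here is a minimal sketch (illustrative only; the operation names, durations and mixer count are assumed, and this is not the speaker's synthesis algorithm) of a greedy list scheduler that binds ready bioassay operations to a small pool of identical mixer modules while respecting droplet dependencies:

```python
# Minimal list-scheduling sketch for droplet-based bioassay operations
# (illustrative only; operation names, durations and resource counts are assumed).

def list_schedule(durations, deps, num_mixers):
    """Greedily bind ready operations to identical mixer modules.

    durations: dict op -> duration in time steps
    deps:      dict op -> iterable of predecessor ops
    returns:   dict op -> (start_time, mixer_index)
    """
    finish = {}                        # op -> finish time
    mixer_free = [0] * num_mixers      # earliest time each mixer is free
    schedule = {}
    remaining = set(durations)
    while remaining:
        # An operation is ready once all of its predecessors have been scheduled.
        ready = sorted(o for o in remaining
                       if all(p in finish for p in deps.get(o, ())))
        for op in ready:
            data_ready = max((finish[p] for p in deps.get(op, ())), default=0)
            mixer = min(range(num_mixers),
                        key=lambda m: max(mixer_free[m], data_ready))
            start = max(mixer_free[mixer], data_ready)
            finish[op] = start + durations[op]
            mixer_free[mixer] = finish[op]
            schedule[op] = (start, mixer)
            remaining.discard(op)
    return schedule

# Example: dilute two samples, mix the results, then detect.
durs = {"dilute1": 2, "dilute2": 2, "mix": 3, "detect": 1}
deps = {"mix": ["dilute1", "dilute2"], "detect": ["mix"]}
print(list_schedule(durs, deps, num_mixers=1))
```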
{"title":"Design and Test of Microfluidic Biochips","authors":"K. Chakrabarty","doi":"10.1109/DDECS.2007.4295247","DOIUrl":"https://doi.org/10.1109/DDECS.2007.4295247","url":null,"abstract":"Microfluidics-based biochips are revolutionizing laboratory procedures involving molecular biology. Advances in microfluidics technology offer exciting possibilities for high-throughput DNA sequencing analysis, protein crystallization, drug discovery, immunoassays, and environmental toxicity monitoring. Another emerging application area for microfluidics-based biochips is clinical diagnostics, especially the immediate point-of-care diagnosis of diseases. Defect tolerance is a key requirement for biochips that are used for healthcare and environmental monitoring. There is a need to deliver the same level of computer-aided design (CAD) support to the biochip designer that the semiconductor industry now takes for granted. These CAD tools will allow designers to harness the new technology that is rapidly emerging for integrated biofluidics. This talk will present early work on design and test techniques for microfluidic biochips. The speaker will describe synthesis tools that can map behavioral descriptions to a droplet-based microfluidic biochip and generate an optimized schedule of bioassay operations, the binding of assay operations to functional units, and the layout and droplet flow-paths for the biochip. Cost-effective testing techniques will be presented to detect faults after manufacture and during field operation. It will be shown how on-line and off-line reconfiguration techniques can be used to easily bypass faults once they are detected. Thus the biochip user can concentrate on the development of the nano-and micro-scale bioassays, leaving implementation details to design automation tools.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115117010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2007-04-11 | DOI: 10.1109/DDECS.2007.4295248
Logic Diagnosis and Yield Learning
J. Rajski
Summary form only given. In the past, logic diagnosis was primarily used to support failure analysis labs. It was typically performed on a small sample of defective chips, so long processing times, manual generation of diagnostic patterns, and the use of expensive equipment were acceptable. In addition to failure analysis, yield learning relied on test chips and in-line inspection. Recently, sub-wavelength lithography processes have started introducing new yield-loss mechanisms at a rate, magnitude, and complexity large enough to demand major changes in the process. Test chips are no longer able to represent the various failure mechanisms originating from critical features: the number of such features is too large to represent them on silicon in a cost-effective manner, and for new processes it is also impossible to predict all significant features up front. With the decreasing size of defects and the increasing percentage of invisible ones, in-line inspection data is not always available.
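To illustrate what a basic logic-diagnosis step computes, here is a minimal sketch of classical fault-dictionary matching (illustrative only, not the speaker's method; the signature encoding and scoring heuristic are assumed): the observed failing patterns of a defective chip are compared against pre-simulated per-fault signatures and the candidate faults are ranked.

```python
# Minimal fault-dictionary diagnosis sketch (illustrative only; the dictionary
# contents and the scoring heuristic are assumed, not taken from the talk).

def diagnose(observed_failures, fault_dictionary):
    """Rank candidate faults by how well their simulated failure signature
    matches the observed one.

    observed_failures: set of (pattern_id, output_pin) pairs seen on the tester
    fault_dictionary:  dict fault_name -> set of (pattern_id, output_pin) pairs
                       predicted by fault simulation
    """
    ranked = []
    for fault, predicted in fault_dictionary.items():
        hits = len(observed_failures & predicted)      # explained failures
        misses = len(observed_failures - predicted)    # unexplained failures
        extras = len(predicted - observed_failures)    # predicted but not observed
        score = hits - 0.5 * (misses + extras)         # simple matching heuristic
        ranked.append((score, fault))
    return [fault for score, fault in sorted(ranked, reverse=True)]

# Example with two modeled stuck-at faults and one observed failure log.
dictionary = {
    "net_a stuck-at-0": {(3, "Q1"), (7, "Q2")},
    "net_b stuck-at-1": {(3, "Q1"), (9, "Q5")},
}
observed = {(3, "Q1"), (7, "Q2")}
print(diagnose(observed, dictionary))   # 'net_a stuck-at-0' ranks first
```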
{"title":"Logic Diagnosis and Yield Learning","authors":"J. Rajski","doi":"10.1109/DDECS.2007.4295248","DOIUrl":"https://doi.org/10.1109/DDECS.2007.4295248","url":null,"abstract":"Summary form only given. In the past, logic diagnosis was primarily used to support failure analysis labs. It was typically done on a small sample of defective chips, therefore long processing times, manual generation of diagnostic patterns, and usage of expensive equipment was acceptable. In addition to failure analysis, yield learning relied on test chips and in-line inspection. Recently, sub-wavelength lithography processes have started introducing new yield loss mechanisms at a rate, magnitude, and complexity large enough to demand major changes in the process. Test chips are no longer able to represent the various failure mechanisms originating from critical features. The number of such features is too large to properly represent it on silicon in a cost-effective manner. For new processes it is also impossible to predict all significant features up front. With the decreasing sizes of defects and increasing percentage of invisible ones, in-line inspection data is not always available.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130495847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2006-04-18 | DOI: 10.1109/DDECS.2006.1649619
A Switch Supporting Circuit and Packet Switching for On-Chip Networks
Hsin-Chou Chi, Chia-Ming Wu, Sung-Tze Wu
In this paper, the design of a hybrid switch for on-chip networks in SoC design is presented. The hybrid switch provides both guaranteed and best-effort communication services for network-on-chip architectures. We use a pre-scheduled circuit-switched network to support guaranteed communication services between IPs on the chip. In order to fully utilize the network bandwidth, we further incorporate a packet-switched architecture. Our design has been implemented in UMC 0.18 μm technology. It has an aggregate bandwidth of 5 × 434 MHz × 64 bits ≈ 139 Gb/s. Compared to previous designs, our switch provides high performance at a reasonable cost.
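For intuition, a minimal sketch (not the authors' design; the slot table, port names and queueing are assumed) of how a TDM slot table can combine the two service classes: reserved slots carry the pre-scheduled circuit-switched (guaranteed) traffic, while unreserved or idle slots are granted to best-effort packets.

```python
# Illustrative hybrid-arbitration sketch (not the authors' implementation):
# reserved TDM slots serve guaranteed traffic, everything else goes best-effort.

from collections import deque

SLOT_TABLE = [None, ("in0", "out2"), None, ("in3", "out1")]  # assumed 4-slot schedule

def arbitrate(cycle, guaranteed_flit, best_effort_queue):
    """Return the flit that wins the output port in this cycle."""
    slot = SLOT_TABLE[cycle % len(SLOT_TABLE)]
    if slot is not None and guaranteed_flit is not None:
        return ("GT", slot, guaranteed_flit)               # reserved slot: guaranteed traffic
    if best_effort_queue:
        return ("BE", None, best_effort_queue.popleft())   # otherwise serve best-effort
    return None                                            # idle cycle

be = deque(["pktA", "pktB"])
for c in range(4):
    print(c, arbitrate(c, guaranteed_flit="gt_flit" if c % 2 else None,
                       best_effort_queue=be))
```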
{"title":"A Switch Supporting Circuit and Packet Switching for On-Chip Networks","authors":"Hsin-Chou Chi, Chia-Ming Wu, Sung-Tze Wu","doi":"10.1109/DDECS.2006.1649619","DOIUrl":"https://doi.org/10.1109/DDECS.2006.1649619","url":null,"abstract":"In this paper, the design of a hybrid switch for on-chip networks in SoC design is presented. This hybrid switch provides both guaranteed and best-effort communication services for network-on-chip architectures. We use the pre-scheduled circuit-switched network to support guaranteed communication service between IPs on the chip. In order to fully utilize the network bandwidth, we further incorporate the packet-switched architecture. Our design has been experimentally implemented using UMC 0.18 mum technology. It has an aggregate bandwidth of 5 times 434MHz times 64 bits = 139 Gb/s. Compared to previous designs, our switch provides high performance with a reasonable cost","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"22 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132364983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1900-01-01 | DOI: 10.1109/DDECS57882.2023.10139525
Embedded Tutorial - RRAMs: How to Guarantee Their Quality Test after Manufacturing?
L. Bolzani
The use of Resistive Random Access Memories (RRAMs) in emerging applications depends not only on being able to properly test them after manufacturing, but also on being able to guarantee their reliability during their lifetime. These novel non-volatile memories can be affected by manufacturing deviations, process variation and defects, as well as by time-dependent deviations and environmental and temporal variations. In this context, this tutorial introduces the main sources of reliability issues at time zero and during the lifetime, highlighting the challenges in properly identifying manufacturing failure mechanisms and, consequently, in deriving more accurate fault models. In addition, the tutorial summarizes the state of the art in manufacturing test strategies and discusses the key challenges of testing RRAMs at time zero. Finally, it explains how to increase the efficiency of manufacturing test strategies in order to avoid test escapes, which can compromise RRAM reliability during the lifetime.
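For orientation, a minimal sketch of the kind of march algorithm that memory manufacturing test strategies commonly build on, here the classical March C- element sequence (the memory interface and fault reporting are assumed; RRAM-specific flows in the literature add device-aware stress conditions and resistive-defect detection on top of such algorithms).

```python
# Minimal March C- sketch over an abstract memory array (illustrative only; the
# read/write interface is assumed, and real RRAM test flows extend march tests
# with device-specific stress and resistive-defect checks).

def march_c_minus(mem, n):
    """Run March C- on cells 0..n-1; return a list of failing (address, phase)."""
    fails = []

    def elem(addresses, ops, phase):
        for a in addresses:
            for op, val in ops:
                if op == "w":
                    mem.write(a, val)
                elif mem.read(a) != val:     # "r": read and compare expected value
                    fails.append((a, phase))

    up, down = range(n), range(n - 1, -1, -1)
    elem(up,   [("w", 0)],           "M0: up w0")
    elem(up,   [("r", 0), ("w", 1)], "M1: up r0,w1")
    elem(up,   [("r", 1), ("w", 0)], "M2: up r1,w0")
    elem(down, [("r", 0), ("w", 1)], "M3: down r0,w1")
    elem(down, [("r", 1), ("w", 0)], "M4: down r1,w0")
    elem(down, [("r", 0)],           "M5: down r0")
    return fails

class FaultFreeMemory:
    def __init__(self, n): self.cells = [0] * n
    def write(self, a, v): self.cells[a] = v
    def read(self, a): return self.cells[a]

print(march_c_minus(FaultFreeMemory(8), 8))   # [] -> no faults detected
```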
{"title":"Embedded Tutorial - RRAMs: How to Guarantee Their Quality Test after Manufacturing?","authors":"L. Bolzani","doi":"10.1109/DDECS57882.2023.10139525","DOIUrl":"https://doi.org/10.1109/DDECS57882.2023.10139525","url":null,"abstract":"The use of Resistive Random Access Memories (RRAMs) for implementing emerging applications depends not only on being able to properly test them after manufacturing, but also being able to guarantee their reliability during lifetime. These novel non-volatile memories can be affected by manufacturing deviations, process variation and defects, as well as by time-dependent deviations, environmental and temporal variations. In this context, this tutorial aims to introduce the main sources of reliability issues at time zero and during lifetime, high-lighting the challenges related to properly identify manufacturing failure mechanisms and consequently, the deviation of more ac-curate fault models. In addition, this tutorial aims to summarize the state-of-the-art regarding manufacturing test strategies and provide a discussion about the key challenges of testing RRAMs at time zero. Finally, this tutorial provides information about how to increase the efficiency of manufacturing test strategies in order to avoid test escapes, which can compromise RRAM’s reliability during lifetime.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114910476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1900-01-01 | DOI: 10.1109/DDECS.2007.4295314
Open Defects Caused by Scratches and Yield Modelling in Deep Sub-micron Integrated Circuit
Wlodzimierz Jonca
This paper examines whether the commonly used spot-defect fault model is still viable for the test and yield modelling of deep sub-micron (DSM) integrated circuits. It is believed that for DSM products spot defects may no longer be the major source of yield loss. Results from a number of computer experiments are presented and discussed.
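For reference, the classical spot-defect yield models against which such an analysis is usually framed (standard textbook formulas, not taken from the paper), with A the critical area, D0 the spot-defect density and α the defect-clustering parameter:

```latex
% Classical spot-defect yield models (illustrative reference, not from the paper):
\[
  Y_{\text{Poisson}} = e^{-A D_0},
  \qquad
  Y_{\text{neg.\ binomial}} = \left(1 + \frac{A D_0}{\alpha}\right)^{-\alpha},
\]
% where A is the critical area, D_0 the spot-defect density and \alpha the
% defect-clustering parameter; the paper questions whether spot-defect models of
% this kind still capture the dominant DSM yield loss, e.g. scratch-induced opens.
```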
{"title":"Open Defects Caused by Scratches and Yield Modelling in Deep Sub-micron Integrated Circuit","authors":"Wlodzimierz Jonca","doi":"10.1109/DDECS.2007.4295314","DOIUrl":"https://doi.org/10.1109/DDECS.2007.4295314","url":null,"abstract":"This paper tries to find out whether commonly used spot defect fault model is still viable for deep sub-micron (DSM) integrated circuits' test and yield model. It is believed that for DSM products spot defects may be no longer major source of yield loss. Results from number of computer experiments are presented and discussed.","PeriodicalId":114139,"journal":{"name":"IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems","volume":"674 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114663427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}