{"title":"Built-in self-test and repair (BISTR) techniques for embedded RAMs","authors":"Shyue-Kung Lu, Shih-Chang Huang","doi":"10.1109/MTDT.2004.7","DOIUrl":"https://doi.org/10.1109/MTDT.2004.7","url":null,"abstract":"High-density and high-capacity embedded memories are important components for successful implementation of a system-on-a-chip. Since embedded memory cores usually occupy a large portion of the chip area, they dominate the manufacturing yield of system chips. In this paper, a built-in self-test and repair (BISTR) approach is proposed for semiconductor memories with 1-D redundancy (redundant rows) structures. The memory rows are virtually divided into row blocks, and reconfiguration is performed at the row-block level instead of the traditional row level. That is, the virtual divided word line (VDWL) concept is used for the repair of memory cores. The hardware overhead is almost negligible. An experimental chip is implemented and shows a low area overhead - about 3.06% for a 256 × 512 SRAM with 4 spare rows. We also compare the repair rate of our approach with previous memory repair algorithms and conclude that our approach improves the repair rate significantly.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114223422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Redundancy - it's not just for defects any more","authors":"R. Aitken","doi":"10.1109/MTDT.2004.19","DOIUrl":"https://doi.org/10.1109/MTDT.2004.19","url":null,"abstract":"This paper shows how process variation affects memory margin and performance, and shows that in some cases additional redundancy capability can be used to recover yield due to process variation in addition to yield recovery for defects.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124691909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do we need anything more than single bit error correction (ECC)?","authors":"M. Spica, T. M. Mak","doi":"10.1109/MTDT.2004.9","DOIUrl":"https://doi.org/10.1109/MTDT.2004.9","url":null,"abstract":"For a long time, single bit error correction (with double bit error detection) has been the mainstay ECC technology for covering soft errors in the cache. From the soft error rate that has been observed (at least terrestrially), people have been content with what single bit correction can offer. For the rare occasion that a double error occurs, ECC will also be able to alert the system and result in a graceful shutdown or otherwise. However, things are changing. As technology scaling continues, we are approaching the point where we will have a billion transistors on a single piece of silicon, with a big part of this budget as memory elements. In a system, the number of memory bits is also on the rise. The scaled technology also brings with it many variations and sensitivities that can cause memory cells to function improperly, or may not function well at certain environmental conditions. Increasingly, ECC is no longer serving as just radiation induced soft error correction, but may be able to effect other forms of fault correction as well. Will ECC be able to serve this multi-faceted role? Do we need more than single bit error correction? Can we afford the cost of multiple bit error correction? Should we need it? This paper will attempt to answer some of these questions and raise issues with the status quo.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125586576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tutorial on magnetic tunnel junction magnetoresistive random-access memory","authors":"B. Cockburn","doi":"10.1109/MTDT.2004.24","DOIUrl":"https://doi.org/10.1109/MTDT.2004.24","url":null,"abstract":"Magnetic tunnel junction magnetoresistive random-access memory (MTJ-MRAM) appears to be in an advanced stage of development at several companies, including Motorola Inc., IBM Corporation, Infineon Technologies and Cypress Semiconductor Corp. MRAM has the potential to become a universal memory technology, with the high speed of SRAM, the nonvolatility of flash memory (but with much greater write-erase endurance than flash memory), and with storage densities that could approach those of DRAM. MRAM is embeddable in conventional CMOS processes with as few as four additional masks. We briefly review early MRAM technologies such as anisotropic MRAM, spin valve MRAM, and pseudo spin valve MRAM. Then we survey both conventional MTJ-MRAM and the recently-developed read-before-write toggle-mode MTJ-MRAM.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122446297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Micro programmable built-in self repair for SRAMs","authors":"Rita Zappa, C. Selva, Danilo Rimondi, C. Torelli, M. Crestan, G. Mastrodomenico, L. Albani","doi":"10.1109/MTDT.2004.14","DOIUrl":"https://doi.org/10.1109/MTDT.2004.14","url":null,"abstract":"A built-in self-repair (BISR) machine is proposed that is able to test at speed and repair embedded static random access memories. Unlike the common approach of blowing laser-fuse registers, here the repair operation is accomplished entirely by the BISR machine, with no external intervention. The information related to the repair operation is stored in an on-chip FLASH memory. The machine is user programmable, since it can test memories of different capacity, architecture and aspect ratio, with up to four test algorithms and two test flows. An \"industrial\" test flow is intended for production, while, in case of failure, a more complex \"screening flow\" makes it possible to distinguish whether an unsuccessful repair operation is due to exceeded redundancy capacity or to faulty FLASH programming. This system aims to enhance test diagnostic capability and to improve the production yield of the devices to which it is connected, bypassing the losses in time and resources of the currently used laser-fuse approach.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130049493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel method for silicon configurable test flow and algorithms for testing, debugging and characterizing different types of embedded memories through a shared controller","authors":"Swapnil Bahl","doi":"10.1109/MTDT.2004.2","DOIUrl":"https://doi.org/10.1109/MTDT.2004.2","url":null,"abstract":"In present day system-on-chips (SOC), a large part (~70%) is occupied by memories. The overall yield of the SoC relies heavily on the memory yield. To minimize the test and diagnosis effort, we present a system for silicon configurable test flow and algorithms for different types of memories including multi-port memories, through a shared controller. It supports manufacturing tests as well as diagnosis and electrical AC characterisation of memories. With low area overhead, the proposed microcode based configurable controller gives the test engineer freedom to do complete testing on-chip with few micro-codes.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121946343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of bit line twisting on the faulty behavior of DRAMs","authors":"Z. Al-Ars, M. Herzog, I. Schanstra, A. V. Goor","doi":"10.1109/MTDT.2004.12","DOIUrl":"https://doi.org/10.1109/MTDT.2004.12","url":null,"abstract":"Bit line twisting is an effective design method commonly used to reduce the impact of bit line coupling noise in high density memory devices. This paper investigates the way bit line twisting influences the faulty behavior of DRAMs, based on an analytical evaluation of coupling effects on the one hand, and a simulation-based fault analysis using a Spice simulation model on the other. Two different DRAM twisting schemes, in addition to a third untwisted bit line scheme, are presented and analyzed. Both the analytical and the simulation-based evaluation results show that each scheme has its own specific impact on the faulty behavior. The same approach presented in the paper can be used to analyze the impact of other bit line twisting schemes on the memory faulty behavior.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122268970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Markov models of fault-tolerant memory systems under SEU","authors":"L. Schiano, M. Ottavi, F. Lombardi","doi":"10.1109/MTDT.2004.13","DOIUrl":"https://doi.org/10.1109/MTDT.2004.13","url":null,"abstract":"A single event upset (SEU) can affect the correct operation of digital systems, such as memories and processors. This paper proposes Markov based models for analyzing the reliability and availability of different fault-tolerant memory arrangements under the operational scenario of an SEU. These arrangements exploit redundancy (either duplex or triplex replication) for dynamic fault-tolerant operation as provided by arbitration (for error detection and output selection) as well as in the presence of dedicated circuitry implementing different correction/detection codes for bit-flips as errors. The primary objective is to preserve either the correctness, or the fail-safe nature of the data output of the memory system for long mission time. It is shown that a duplex memory system encoded with error control codes has a higher reliability than the triplex arrangement. Moreover, the use of a code for single error correction and double error detection (SEC-DED) improves both availability and reliability compared to an error correction code with same error detection capabilities.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124056828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The state-of-art and future trends in testing embedded memories","authors":"S. Hamdioui, G. Gaydadjiev, A. V. Goor","doi":"10.1109/MTDT.2004.23","DOIUrl":"https://doi.org/10.1109/MTDT.2004.23","url":null,"abstract":"According to the International Technology Roadmap for Semiconductors (ITRS 2001), embedded memories will continue to dominate the increasing system-on-chip (SoC) content in the coming years, approaching 94% in about 10 years. Therefore the memory yield will have a dramatic impact on the overall defect-per-million (DPM) level, and hence on the overall SoC yield. Achieving a high memory yield requires understanding memory designs, modelling their faulty behavior in the presence of defects, and designing adequate test and diagnosis strategies as well as efficient repair schemes. This paper presents the state of the art in memory testing, including fault modeling, test design, built-in self-test (BIST) and built-in self-repair (BISR). Further research challenges and opportunities in testing (embedded) memories that use deep-submicron technologies are discussed.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131043459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A parallel built-in diagnostic scheme for multiple embedded memories","authors":"Li-Ming Denq, Rei-Fu Huang, Cheng-Wen Wu, Yeong-Jar Chang, Wen Ching Wu","doi":"10.1109/MTDT.2004.3","DOIUrl":"https://doi.org/10.1109/MTDT.2004.3","url":null,"abstract":"Hundreds of memory cores can be found on a typical system-on-chip (SOC) today. Diagnosing such a large number of memory cores using a conventional built-in self-test (BIST) architecture consumes too much time, as its on-chip diagnostics support is for sequential diagnosis only. In this paper, we present a memory BIST architecture with parallel diagnosis scheme. The proposed parallel built-in self-diagnosis (PBISD) scheme was developed to work with our existing memory optimization and reconfiguration (MORE) system, which configures small memory cores into the large one specified by the user, subject to the power and geometry constraints. With PBISD and MORE, memory test and diagnosis can be done in a much shorter time, and the whole system provides a good balance among test time, test power, and test hardware overhead. Experimental results show that, when compared with a conventional BISD scheme, the diagnosis time for a case with four memory cores is only 25%. Moreover, the area overhead is only 49%, as only one test pattern generator is required.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131819734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}