A programmable built-in self-diagnosis for embedded SRAM (doi: 10.1109/MTDT.2004.4)
C. Selva, C. Torelli, Danilo Rimondi, Rita Zappa, S. Corbani, G. Mastrodomenico, L. Albani
In this work we present a built-in self-diagnosis (BISD) module, an integrated solution for the fault diagnosis of embedded memories. The proposed BISD methodology includes a built-in self-test block and additional circuitry that performs on-chip failure analysis to detect the main defects. The fault diagnosis system is aimed at technology maturation as well as at diagnosing circuits in case of a sudden yield drop. Fault diagnosis is a key factor for technology maturation: new technologies require a certain time to stabilize before they can be used for volume production. On the other hand, sudden yield drops can also occur when the technology is stable, and in that case a fast yield recovery is required. The BISD module is highly reconfigurable: it can be programmed with different test algorithms, it is flexible with respect to memory size and address scrambling, and it can be reconfigured to diagnose a selected part of the array. The BISD block has been implemented in a 0.13 µm flash technology with a 512 Kbit SRAM; it has an area overhead of 13% and a maximum operating frequency of 150 MHz.
{"title":"A programmable built-in self-diagnosis for embedded SRAM","authors":"C. Selva, C. Torelli, Danilo Rimondi, Rita Zappa, S. Corbani, G. Mastrodomenico, L. Albani","doi":"10.1109/MTDT.2004.4","DOIUrl":"https://doi.org/10.1109/MTDT.2004.4","url":null,"abstract":"In this work we present a built-in self-diagnosis (BISD) module, an integrated solution for the fault diagnosis of embedded memories. The BISD methodology proposed includes a built-in self-test block and an additional circuitry to perform the on-chip failure analysis in order to detect the main defects. The fault diagnosis system developed is aimed to the maturation of the technology as well as to the diagnosis of circuits in case of sudden yield drop. The fault diagnosis is a key factor for the technology maturation. New technologies require a certain time to get stability before being used for massive production. On the other hand, problems of sudden yield drop can occur also when the technology is stable. In this case a fast recovery on yield is required. This BISD module is highly re-configurable, its main characteristics are the programmability with different test algorithms, the flexibility with respect to memory sizes and address scrambling and the reconfigurability with respect to the part of the array to diagnose. The BISD block has been implemented in a 0.13 /spl mu/m flash technology with a 512Kbit SRAM, it has an area overhead of 13% and its maximum operation frequency is 150MHz.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126642613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An integrated memory self test and EDA solution (doi: 10.1109/MTDT.2004.5)
R. Adams, R. Abbott, Xiaoliang Bai, D. Burek, E. MacDonald
Memory built-in self-test (BIST) is a critical part of the chip-design and electronic design automation (EDA) flow. A BIST tool needs to understand the memory at the topological and layout levels in order to test for the correct fault models. The BIST also needs to be fully integrated into the overall EDA flow so that it has the least impact on chip area and offers the greatest ease of use to the chip designer.
{"title":"An integrated memory self test and EDA solution","authors":"R. Adams, R. Abbott, Xiaoliang Bai, D. Burek, E. MacDonald","doi":"10.1109/MTDT.2004.5","DOIUrl":"https://doi.org/10.1109/MTDT.2004.5","url":null,"abstract":"Memory built-in self-test (BIST) is a critical portion of the chip design and electronic design automation (EDA) flow. A BIST tool needs to understand the memory at the topological and layout levels in order to test for the correct fault models. The BIST also needs to be fully integrated into the overall EDA flow in order to have the least impact on chip area and have the greatest ease of use to the chip designer.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127094301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SF-LRU cache replacement algorithm (doi: 10.1109/MTDT.2004.20)
J. Alghazo, A. Akaaboune, N. Botros
In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used), that combines LRU (least recently used) and LFU (least frequently used) using the second-chance concept. A comprehensive comparison is made between our algorithm and both the LRU and LFU algorithms. Experimental results show that SF-LRU significantly reduces the number of cache misses compared to the other two algorithms. Simulation results show that our algorithm can provide up to approximately 6.3% improvement in miss ratio over the LRU algorithm for the data cache and approximately 9.3% improvement in miss ratio for the instruction cache. This performance improvement is attributed to the fact that our algorithm gives a second chance to the block that would be evicted under LRU's rules, by comparing the frequency of that block with that of the block next to it in the set.
{"title":"SF-LRU cache replacement algorithm","authors":"J. Alghazo, A. Akaaboune, N. Botros","doi":"10.1109/MTDT.2004.20","DOIUrl":"https://doi.org/10.1109/MTDT.2004.20","url":null,"abstract":"In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127848728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded memory reliability: the SER challenge (doi: 10.1109/MTDT.2004.10)
N. Derhacobian, V. Vardanian, Y. Zorian
Drastic decreases in device dimensions and supply voltage have significantly reduced noise margins and challenged the reliability of very deep-submicron chips. The soft error rate (SER) is the main cause of this challenge. Although both logic blocks and embedded memories are affected, the failure rate in embedded memories remains dominant and requires infrastructure IP for self-correction. ECC is such an IP: it operates in the field during normal-mode operation of the chip. In this case the infrastructure IP needs to be fully integrated with the functional memory IP, which allows for timing and area optimization and provides protection throughout the life cycle. This work discusses the growing SER challenge and the integrated-IP approach that helps resolve it.
{"title":"Embedded memory reliability: the SER challenge","authors":"N. Derhacobian, V. Vardanian, Y. Zorian","doi":"10.1109/MTDT.2004.10","DOIUrl":"https://doi.org/10.1109/MTDT.2004.10","url":null,"abstract":"Drastic decreases in device dimensions and power supply have significantly reduced noise margins and challenged the reliability of very deep-submicron chips. Soft error rate is the main cause behind this challenge. Even though both logic block and embedded memories are impacted by this challenge, but the failure rate in embedded memories remains dominant and requires infrastructure IP for self-correctness. ECC is such an IP. It operates in the field during normal mode operation of a chip. The infrastructure IP in this case need to be fully integrated with the functional memory IP. This allows for timing and area optimization and provides protection throughout the life cycle. This work discusses the growing SER challenge and discusses the integrated IP approach to help resolve it.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121192761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effectiveness of the scan test and its new variants (doi: 10.1109/MTDT.2004.22)
A. V. Goor, S. Hamdioui, Z. Al-Ars
Many industrial experiments have shown that the very simple and time-efficient Scan test detects many unique faults. This paper sheds new light on the properties of the Scan test; these properties are evaluated using industrial data. In addition, it is shown that many faults in a memory that are not in the cell array are detectable using the appropriate read-write sequences. The traditional version of the Scan test performs only some of these read-write sequences and lacks the capability of performing all of them for both the 'up' and the 'down' address orders and the '0' and the '1' data values. Therefore a new set of Scan-based tests is proposed to fill that gap.
{"title":"The effectiveness of the scan test and its new variants","authors":"A. V. Goor, S. Hamdioui, Z. Al-Ars","doi":"10.1109/MTDT.2004.22","DOIUrl":"https://doi.org/10.1109/MTDT.2004.22","url":null,"abstract":"Many industrial experiments have shown that the very simple and time-efficient Scan test detects many unique faults. This paper shines a new light on the properties of Scan test; such properties will be evaluated using industrial data. In addition, it will be shown that many faults in a memory, which are not in the cell array, are detectable using the appropriate read-write sequences. The traditional version of Scan test performs 'some' of such read-write sequences, but lacks the capability of performing all of them for both the 'up' and the 'down' address orders and the '0' and the '1' data values. Therefore a new set of scan based tests are proposed to fill that vacuum.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123832011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast error-correcting circuits for fault-tolerant memory (doi: 10.1109/MTDT.2004.11)
E. Ou, Woodward Yang
This work explores the design and analysis of an error-correcting circuit as applied to high-density, low-latency memories, especially NOR flash and DRAM. In very high density semiconductor memory products, exhaustive testing and repair procedures are essential to ensure the proper operation of every memory location under the worst possible conditions, and they can account for a significant portion of the total production cost. Implementing error-correcting circuits in conjunction with other currently used methods for designing more fault-tolerant high-density memory could allow for simpler testing procedures after memory fabrication and significantly reduce the overall cost. Error-correcting circuits could also increase the reliability of the memory and extend its lifetime. This paper illustrates one possible implementation of error-correcting circuits, in the form of a Hamming decoder. Clocking is accomplished with asynchronous pulse generators to ensure fast cycle times and minimal decoding delay. These circuits were designed to show that error correction can be achieved with minimal additional circuitry, system complexity, power consumption and latency.
{"title":"Fast error-correcting circuits for fault-tolerant memory","authors":"E. Ou, Woodward Yang","doi":"10.1109/MTDT.2004.11","DOIUrl":"https://doi.org/10.1109/MTDT.2004.11","url":null,"abstract":"This work explores the design and analysis of an error-correcting circuit as applied to high density and low latency memories, especially NOR Flash and DRAM. In very high density semiconductor memory products, exhaustive testing and repair procedures are essential to insure the proper operation of every memory location under worst possible conditions and can account for a significant portion of the total production cost. The implementation of error-correcting circuits in conjunction with other currently-used methods for designing more fault-tolerant high density memory could allow for more simplified testing procedures after memory fabrication and significantly reduce the overall cost. Also, error-correcting circuits could increase the reliability of the memory and extend its lifetime. This paper illustrates one possible implementation of error-correcting circuits, in the form of a Hamming decoder. Clocking was accomplished with asynchronous pulse generators to ensure fast cycle times and minimal decoding delay. These circuits were designed to show that error correction can be achieved with minimal additional circuitry, system complexity, power consumption and latency.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132322786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A BIST algorithm for bit/group write enable faults in SRAMs (doi: 10.1109/MTDT.2004.1)
S. Adham, B. Nadeau-Dostie
The use of group (or bit) write enables is becoming very common in embedded memories. The circuitry used to implement these functions needs to be thoroughly tested for different kinds of defects using specific test sequences. However, most BIST algorithms assume that these write enables are forced active during the global write cycle of the BIST run. This paper presents a serial-interface BIST algorithm used to test defects on the bit/group write enables of these memories.
{"title":"A BIST algorithm for bit/group write enable faults in SRAMs","authors":"S. Adham, B. Nadeau-Dostie","doi":"10.1109/MTDT.2004.1","DOIUrl":"https://doi.org/10.1109/MTDT.2004.1","url":null,"abstract":"The use of group (or bit) write enable in memories is becoming very common in embedded memories. The circuitry used to achieve these functions need be thoroughly tested for different kind of defects using specific test sequence. However, most BIST algorithms assume that these write enables are forced active during the global write cycle in the BIST run. This paper presents a serial interface BIST algorithm that is used to test defect on bit/group write enables of these memories.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127208097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tag skipping technique using WTS buffer for optimal low power cache design (doi: 10.1109/MTDT.2004.21)
A. Akaaboune, N. Botros, J. Alghazo
In this paper we present a robust technique to reduce the power consumption of a 4-way set-associative cache. Our algorithm is a modification of the technique proposed by H. Choi et al. (2000), which skips tag look-ups to achieve a lower-power design. Previous work shows that applying the tag-skipping technique to a not-load-on-write-miss architecture, although it reduces the overall power consumption, still consumes significant power on write misses by frequently accessing main memory. We propose the use of a write tag-skipping (WTS) buffer (WTSB) to reduce the number of write misses by 50-85%, thereby reducing accesses to more power-consuming devices such as main memory. The result is that tag-skipping operations that would otherwise occur during a miss are shifted to a hit.
{"title":"Tag skipping technique using WTS buffer for optimal low power cache design","authors":"A. Akaaboune, N. Botros, J. Alghazo","doi":"10.1109/MTDT.2004.21","DOIUrl":"https://doi.org/10.1109/MTDT.2004.21","url":null,"abstract":"In this paper we present a robust technique to reduce the power consumption for a 4-way set-associativity cache. Our algorithm is a modification of the technique proposed by H. Choi et al. (2000) which allows skipping tag look-ups to achieve a better power consumption design. Previous work shows that implementing tag-skipping technique on a Not-Load-on-write-miss architecture, though reduces the overall power consumption, yet still consumes significant power in write miss by frequently accessing main memory. We propose the use of a write tag-skipping (WTS) buffer (WTSB) to reduce the number of write misses by 50-85% therefore reducing accesses to more power consuming devices such as main memory. This results in shifting all tag-skipping operations occurring during a miss to a hit.","PeriodicalId":415606,"journal":{"name":"Records of the 2004 International Workshop on Memory Technology, Design and Testing, 2004.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128899010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}